Don’t be Sentimental!
Due: Monday February 15, 2021 @ 6 a.m. - pushed to Github and release issued.
Introduction
Have you ever read a tweet and thought, “Gee, what a positive outlook!” or “Wow, why so negative, friend?” Can computers make the same determination? They can surely try!

In Machine Learning, the task of assigning a label to a data item is called classification (putting things into different classes or categories). The more specific name for what we’re going to do is sentiment analysis because you’re trying to determine the “sentiment” or attitude based on the words in a tweet. So, Project 1 is to build a sentiment classifier! Aren’t you excited?? ( ← That would be positive sentiment!)

You’ll be given a set of tweets that are already pre-classified as positive or negative based on their content. You’ll analyze the word frequency patterns among all of those tweets to develop a classification algorithm. Using your classification algorithm, you’ll then classify another set of tweets to determine if they are positive or negative.
Building a Classifier
The goal in classification is to assign a class label to each element of a data set. Of course, we would want this done with the highest accuracy possible. For this project, we will have only two classes or labels: positive sentiment and negative sentiment. At a high level, the process to build a classifier (and many other machine learning models) is this:
1.Train
○Use a training data set with pre-classified members.
○Assume you have 10 tweets and each is pre-classified with + or - sentiment. How might you go about analyzing the words in the tweets to find words more commonly associated with negative sentiment and words more commonly associated with positive sentiment?
○The result of the training step will be two lists of words: 1 list for positive words and 1 list for negative words
2.Test
○Now, you give your classifier un-labeled tweets from a testing data set and ask it to output the class it determines.
○But behind the scenes, you already know what class each tweet actually belongs to.
○Compare the predicted class of each tweet (the output of your classifier) and the actual class of each tweet and determine the accuracy. In other words, how correct was your classifier?
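To make the train/test/evaluate flow above concrete, here is a minimal C++ sketch. All names here (LabeledTweet, SentimentClassifier, train, predict, evaluate) are illustrative, not a required design, and std::string is used only for brevity; the project itself requires a custom string class (see Implementation Requirements below).

#include <string>
#include <vector>

struct LabeledTweet {
    int sentiment;        // 0 = negative, 4 = positive
    std::string text;
};

class SentimentClassifier {
public:
    void train(const std::vector<LabeledTweet>& trainingSet) {
        // Analyze word frequencies here to build the positive/negative word lists.
        (void)trainingSet;
    }
    int predict(const std::string& tweetText) const {
        // Score the tweet against the word lists; this placeholder always says positive.
        (void)tweetText;
        return 4;
    }
};

// Accuracy = correctly classified test tweets / total test tweets.
double evaluate(const SentimentClassifier& c, const std::vector<LabeledTweet>& testSet) {
    int correct = 0;
    for (const LabeledTweet& t : testSet) {
        if (c.predict(t.text) == t.sentiment) {
            ++correct;
        }
    }
    return testSet.empty() ? 0.0 : static_cast<double>(correct) / testSet.size();
}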
The Real Data
The data set we will be using in this project comes from real tweets posted around 11-12 years ago. The original data was retrieved from Kaggle at https://www.kaggle.com/kazanova/sentiment140. I’ve pre-processed it into the file format we are using for this project. For more information, please see
Go, A., Bhayani, R. and Huang, L., 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(2009), p.12.


Input files
There will be 3 different input files:
1.Training Data
2.Testing Data (no sentiment column)
3.Testing Data Sentiment (tweet id and sentiment for the testing data, for you to compare against).

The training data set is formatted as follows:
●A comma-separated values (CSV) file containing a list of tweets, each one on a separate line. Each line of the file includes the following fields:
○Sentiment value (negative = 0, positive = 4),
○the tweet id,
○the date the tweet was posted
○Query status (you will ignore this column)
○the twitter username that posted the tweet
○the text of the tweet itself

The testing data set is broken into two files:
●A CSV file formatted just like the training data EXCEPT with no sentiment column
●A CSV file containing the tweet ID and sentiment for the testing dataset (so you can compare your predicted sentiment to the actual sentiment, i.e. the ground truth)

Below are two example tweets from the training dataset:
4,1467811594,Mon Apr 06 22:20:03 PDT 2009,NO_QUERY,peruna_pony,"Beat TCU"
0,1467811595,Mon Apr 06 22:22:03 PDT 2009,NO_QUERY,the_frog,"Beat SMU"
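As a rough illustration of reading lines like these, here is a sketch using standard-library streams. The filename is hypothetical, and std::string/std::getline are used only to show the idea (the project's custom string requirement still applies). Because the tweet text is the last field, reading the remainder of the line keeps any commas that appear inside the quoted text.

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::ifstream in("train_dataset.csv");      // hypothetical filename
    std::string line;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        std::string sentiment, id, date, query, user, text;
        std::getline(ss, sentiment, ',');
        std::getline(ss, id, ',');
        std::getline(ss, date, ',');
        std::getline(ss, query, ',');           // NO_QUERY column, ignored
        std::getline(ss, user, ',');
        std::getline(ss, text);                 // rest of the line: the quoted tweet text
        std::cout << sentiment << " " << id << " " << text << "\n";
    }
    return 0;
}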

Here is an example tweet from the testing dataset:
1467811596,Mon Apr 06 22:20:03 PDT 2009,NO_QUERY,peruna_pony,"SMU > TCU"

The sentiment file for that testing tweet would be:
4, 1467811596
Output Files
There will be one output file, organized as follows:
●The first line of the output file will contain the accuracy, a single floating point number with exactly 3 decimal places of precision. See the section “How good is your classifier” below to understand Accuracy.
●The remaining lines of the file will contain the Tweet IDs of the tweets from the testing data set that your algorithm incorrectly classified.

Example of the Testing Data tweet classifications file (These tweet IDs are fake):
0.500
2323232323
1132553423
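A small sketch of producing this file with exactly three decimal places of precision; the output filename, accuracy value, and IDs below are placeholders.

#include <fstream>
#include <iomanip>
#include <vector>

int main() {
    double accuracy = 0.5;                               // computed from your classifier
    std::vector<long long> misclassifiedIds = {2323232323LL, 1132553423LL};

    std::ofstream out("results.txt");                    // hypothetical output filename
    out << std::fixed << std::setprecision(3) << accuracy << "\n";
    for (long long id : misclassifiedIds) {
        out << id << "\n";
    }
    return 0;
}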
Running your Program
Your program will have two modes
●Running Catch TDD tests
○this will be indicated by passing no arguments to the executable
○this is explained more below.
●Training & Testing the Classifier
○this mode will have 4 command line arguments:
■training data set filename - the file with the training tweets
■testing data set filename - tweets that your program will classify
■testing data set sentiment filename - the file with the classes for the testing tweet data
■output file name - see Output Files section above
■Example:
./classifier.out <training data file> <testing data file> <testing sentiment file> <output file>
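One possible shape for main, selecting the mode by argument count. The test-mode branch below is only a placeholder; the actual Catch runner setup is covered in the lab handout.

#include <iostream>

int main(int argc, char** argv) {
    if (argc == 1) {
        // No arguments: run the Catch TDD tests (runner setup covered in the lab handout).
        std::cout << "Running Catch tests...\n";
        return 0;
    }
    if (argc == 5) {
        const char* trainFile     = argv[1];   // training data set filename
        const char* testFile      = argv[2];   // testing data set filename
        const char* sentimentFile = argv[3];   // testing data set sentiment filename
        const char* outputFile    = argv[4];   // output file name
        // ... train the classifier, classify the test tweets, write the output file ...
        std::cout << trainFile << " " << testFile << " "
                  << sentimentFile << " " << outputFile << "\n";
        return 0;
    }
    std::cerr << "Usage: ./classifier.out <training file> <testing file>"
                 " <testing sentiment file> <output file>\n";
    return 1;
}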
Training Your Classification Algorithm
You’ll use word frequencies to train your classification algorithm. For the tweets in the training data set, you’ll calculate word frequencies for the positive tweets and for the negative tweets. You might expect a few things to happen:
●certain words might appear more frequently in negative tweets. Other words might appear more frequently in positive tweets.
●certain words might appear frequently in both positive and negative tweets. Are these useful at all?
You might also consider the possibility that some people are more positive than other people. Could you use username for classification?
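As a sketch of the word-frequency idea, the snippet below counts how often each word appears in positive versus negative training tweets. std::map and std::string are used purely for illustration, and how you score and combine these counts into a prediction is up to you.

#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

struct WordCounts {
    std::map<std::string, int> positive;
    std::map<std::string, int> negative;
};

WordCounts countWords(const std::vector<std::pair<int, std::string>>& labeledTweets) {
    WordCounts counts;
    for (const auto& tweet : labeledTweets) {
        std::stringstream words(tweet.second);   // tweet.second is the tweet text
        std::string word;
        while (words >> word) {
            if (tweet.first == 4) {
                ++counts.positive[word];         // word seen in a positive tweet
            } else {
                ++counts.negative[word];         // word seen in a negative tweet
            }
        }
    }
    return counts;
}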

As you develop your classification algorithm, you can use the output file containing the incorrectly classified tweets to refine your solution.
Implementation Requirements
You must implement your own custom string class as part of this project. You may not use the STL string class or any other available string class from the Internet. Additionally, you may NOT use c-strings (null-terminated character arrays) except to implement your custom string class and as a temporary buffer when reading from a file. More on the requirements of this custom class will be in a separate handout.
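For orientation only, here is the rough shape such a class might take. The actual required interface will be given in the separate handout, and the class name DSString is just a placeholder.

#include <cstddef>
#include <cstring>

class DSString {
    char*       data;   // internal null-terminated buffer (c-strings are allowed inside the class)
    std::size_t len;
public:
    DSString() : data(new char[1]{'\0'}), len(0) {}
    DSString(const char* s) : len(std::strlen(s)) {
        data = new char[len + 1];
        std::strcpy(data, s);
    }
    DSString(const DSString& other) : DSString(other.data) {}
    DSString& operator=(const DSString& other) {
        if (this != &other) {
            delete[] data;
            len  = other.len;
            data = new char[len + 1];
            std::strcpy(data, other.data);
        }
        return *this;
    }
    ~DSString() { delete[] data; }
    std::size_t length() const { return len; }
    const char* c_str()  const { return data; }
};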

You will also need to implement a suite of tests for the string class using the CATCH TDD library. More on this in a separate handout in the first week of lab. You’ll provide a testing mode (the mode with no args) which will execute your tests.
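A minimal sketch of what one such test might look like, assuming the single-header catch.hpp and the hypothetical DSString class sketched above; the exact runner setup (so tests run in the no-argument mode alongside your own main) is part of the lab handout.

#include "catch.hpp"     // single-header Catch, per the lab handout
#include "DSString.h"    // hypothetical header for the custom string class
#include <cstring>

TEST_CASE("DSString stores and reports its contents", "[DSString]") {
    DSString s("hello");
    REQUIRE(s.length() == 5);
    REQUIRE(std::strcmp(s.c_str(), "hello") == 0);
}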
Comments about this Project
This is your opportunity to make a great first impression on the Prof/TAs in CSE 2341. We will be looking for simple, elegant, well-designed solutions to this problem. Please do not try to “wow” us with your knowledge of random, esoteric programming practices. Here is a list of things that you might want to consider while tackling this project:
●Procedural vs Object Oriented Design
○A seemingly infinite amount of software has been designed, implemented, deployed, maintained, updated, and redeployed using both of these paradigms. One could argue for days, or weeks even, about which is the “better” paradigm for modern software development. Regardless of which paradigm you choose to use, the most important thing is that you produce an elegant solution following solid software development practices.
●File input and output
○It is so important to be able to read from and write to data files. Think about some software program that doesn't use files...
●Just the right amount of comments in just the right places
●Minimal amount of code in the driver method (main, in the case of C++)
○The code in main should be minimal and only used to “get the ball rolling.”
●Proper use of command-line arguments
●Proper memory management
How Good is Your Classifier?
Training and Testing of a classification algorithm is an iterative process. You’ll develop a training algorithm, test it, evaluate its performance, tweak the algo, retrain, retest, etc. How do you know how good your classifier is after each development iteration, though? We will use accuracy as the metric for evaluation.
Accuracy = (total number of correctly classified tweets from the test data set) / (total number of tweets in the test data set)

Why should you be interested in this? The TAs will take your final solutions and classify a set of tweets that your algorithm has never seen. They will calculate the accuracies (as defined above) for each submission. The 10 submissions with the highest accuracies will get 10 bonus points.
What to Submit
During lab in the coming weeks, you will cover a basic introduction to GitHub and how to use it to submit your projects. You will need to push your code (including your tests) to GitHub and create a new release by February 15, 2021 at 6 a.m.
Grading Rubric
Category                                            Points Possible   Points Awarded
String Class                                        20
CATCH Tests                                         5
Training Algo Implementation                        25
Classification Algo Implementation                  25
Source Code Quality (formatting, comments, etc.)    15
Proper use of Git and GitHub                        10
