
FIT5196-S1-2025 Assessment 2

This is a group assessment and is worth 40% of your total mark for FIT5196.

Due date: Friday 23 May 2025, 11:55pm

Task 1. Data Cleansing (21/40)

For this assessment, you are required to write Python code to analyse your dataset, find and fix the problems in the data. The input and output of this task are shown below:

Table 1. The input and output of task 1

Input files:

○    Group<group_id>_dirty_data.csv

○    Group<group_id>_outlier_data.csv

○    Group<group_id>_missing_data.csv

○    branches.csv

○    edges.csv

○    nodes.csv

Submission (Output files):

○    Group<group_id>_dirty_data_solution.csv

○    Group<group_id>_outlier_data_solution.csv

○    Group<group_id>_missing_data_solution.csv

Submission (Other Deliverables):

○    Group<group_id>_ass2_task1.ipynb

○    Group<group_id>_ass2_task1.py

Note1: All files must be zipped into a file named Group<group_id>_ass2.zip (please use zip, not rar, 7z, tar, etc.)

Note2: Replace <group_id> with your group id (do not include <>)

Note3: You can find all your input files from the folder with your group number here. Using the wrong files will result in zero marks.

Note4: Please strictly follow the instructions in the appendix to generate the .ipynb and .py files.

Exploring and understanding the data is one of the most important parts of the data wrangling process. You are required to perform graphical and/or non-graphical EDA methods to understand the data first and then find the data problems. In this assessment, you have been provided with three data inputs along with three additional files: branches.csv, edges.csv and nodes.csv here. Due to an unexpected scenario, a portion of the data is missing or contains anomalous values. Thus, before moving to the next step in data analysis, you are required to perform the following tasks:

1. Detect and fix errors in Group<group_id>_dirty_data.csv

2. Impute the missing values in Group<group_id>_missing_data.csv

3. Detect and remove outlier rows in Group<group_id>_outlier_data.csv

○    (w.r.t. the delivery_fee attribute only)

Project Background

As a starting point, here is what we know about the dataset in hand:

The dataset contains Food Delivery data from a restaurant in Melbourne, Australia.  The restaurant has three branches around the CBD area. All three branches share the same menu but they have different management so they operate differently.

Each instance of the data represents a single delivery order. The description of each data column is shown in Table 2.

Table 2. Description of the columns

COLUMN DESCRIPTION

order_id A unique id for each order

date The date the order was made, given in YYYY-MM-DD format

time The time the order was made, given in hh:mm:ss format

order_type A categorical attribute representing the different types of orders namely: Breakfast, Lunch or Dinner

branch_code A categorical attribute representing the branch code in which the order was made. Branch information is given in the branches.csv file.

order_items A list of tuples representing the order items: the first element of the tuple is the item ordered, and the second element is the quantity ordered for that item.

order_price A float value representing the order total price

customer_lat Latitude of the customer coming from the nodes.csv file

customer_lon Longitude of the customer coming from the nodes.csv file

customerHasloyalty? A logical variable denoting whether the customer has a loyalty card with the restaurant (1 if the customer has loyalty and 0 otherwise)

distance_to_customer_KM A float representing the shortest distance, in kilometres, between the branch and the customer nodes with respect to the nodes.csv and edges.csv files. Dijkstra's algorithm can be used to find the shortest path between two nodes in a graph. Reading materials can be found here.

delivery_fee A float representing the delivery charges of the order
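Because order_items arrives in the CSV as a plain string rather than a Python list, it needs to be parsed before the quantities can be used. A minimal sketch using the standard library's ast.literal_eval (the item names below are made up for illustration):

```python
import ast

# order_items is stored as text such as "[('Pancake', 2), ('Coffee', 1)]"
# (hypothetical item names); ast.literal_eval safely parses it back into
# a list of (item, quantity) tuples without executing arbitrary code.
raw = "[('Pancake', 2), ('Coffee', 1)]"
items = ast.literal_eval(raw)

total_quantity = sum(qty for _, qty in items)
print(items)           # [('Pancake', 2), ('Coffee', 1)]
print(total_quantity)  # 3
```

In practice you would apply this parser column-wise (e.g. with pandas' .apply) rather than to a single string.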

Notes:

1.   The output csv files must have the exact same columns as the respective input files. Any misspelling or mismatch will lead to a malfunction of the auto-marker which will in turn lead to losing marks.

2.   In the file Group<group_id>_dirty_data.csv, any row can carry no more than one anomaly (i.e. there can only be up to one issue in a single row).

3.  All anomalies in dirty data have one and only one possible fix.

4.   There are no data anomalies in the file Group<group_id>_outlier_data.csv except for outliers. Similarly, there are only missing-value (coverage) anomalies, i.e. no other data anomalies, in Group<group_id>_missing_data.csv.

5.   There are three types of meals:

○    Breakfast - served during morning (8am - 12pm),

○    Lunch - served during afternoon (12:00:01pm - 4pm)

○    Dinner - served during evening (4:00:01pm - 8pm)

Each meal has a distinct set of items on the menu (e.g. breakfast items can't be served during lunch or dinner, and so on).
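The meal windows above can be turned into a small helper for cross-checking order_type against the time column. A sketch of one possible mapping, assuming any order outside 8am-8pm is invalid:

```python
from datetime import time

def meal_period(t: time) -> str:
    """Map an order time to its meal period per the assignment's windows:
    Breakfast 8:00:00-12:00:00, Lunch 12:00:01-16:00:00, Dinner 16:00:01-20:00:00."""
    if t < time(8) or t > time(20):
        raise ValueError(f"Order time {t} is outside opening hours")
    if t <= time(12):
        return "Breakfast"
    if t <= time(16):
        return "Lunch"
    return "Dinner"

print(meal_period(time(9, 30, 0)))   # Breakfast
print(meal_period(time(12, 0, 1)))   # Lunch
print(meal_period(time(18, 45, 0)))  # Dinner
```

A row whose order_type disagrees with meal_period(time) is a candidate anomaly; since time is listed as error-free below, order_type would be the field to fix.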

6.   In order to get the item unit price, a useful Python package for solving systems of multivariable linear equations is numpy.linalg.
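Each order contributes one linear equation (quantities times unit prices equals order_price), so with enough independent orders the unit prices can be recovered. A toy sketch with three hypothetical items and made-up prices:

```python
import numpy as np

# Toy example with three hypothetical menu items x, y, z. Each row of A is
# one order's quantities; b holds the corresponding order_price values.
# Order 1: 2x + 1y + 0z = 34.0
# Order 2: 1x + 0y + 3z = 45.0
# Order 3: 0x + 2y + 1z = 31.0
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 2.0, 1.0]])
b = np.array([34.0, 45.0, 31.0])

unit_prices = np.linalg.solve(A, b)
print(unit_prices)  # [12. 10. 11.]
```

On the real data you would build one such system per meal type (since the menus are disjoint), using rows you trust to have correct order_items and order_price.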

7. Delivery fee is calculated using a different method for each branch. All branches serve Melbourne customers ONLY.

The fee depends linearly (but in different ways for each branch) on:

a.    weekend or weekday (1 or 0)

b.    time of the day (morning 0, afternoon 1, evening 2)

c.    distance between branch and customer

It is recommended to use sklearn.linear_model.LinearRegression for solving the linear model as demonstrated in the tutorials. No need to set the variables as categorical variables when modelling, just treat the discrete numbers as continuous variables.
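As a sketch of that recommendation, the snippet below fits a single linear model on synthetic data for one branch; the coefficients, sample size, and value ranges are invented. On the real data you would fit one model per branch and undo the loyalty discount before training:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data for ONE branch, assuming (hypothetically) that
# fee = 3.0 + 1.5*weekend + 0.8*time_of_day + 0.9*distance.
rng = np.random.default_rng(0)
n = 200
weekend = rng.integers(0, 2, n)        # 1 = weekend, 0 = weekday
time_of_day = rng.integers(0, 3, n)    # morning 0, afternoon 1, evening 2
distance = rng.uniform(0.5, 10.0, n)   # km to the customer
fee = 3.0 + 1.5 * weekend + 0.8 * time_of_day + 0.9 * distance

X = np.column_stack([weekend, time_of_day, distance])
model = LinearRegression().fit(X, fee)
print(model.score(X, fee))  # R^2; ~1.0 here because the data is noiseless
```

Per note 8 below, an R^2 well above 0.95 is the signal that the training rows were clean; rows whose observed fee deviates strongly from model.predict are the outlier candidates.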

8.   Using proper data for model training is crucial to have a good linear model (i.e. an R2 score over 0.95 and very close to 1) to validate the delivery fee. The better your model is, the more accurate your result will be.

9.   If a customer has loyalty, they get a 50% discount on the delivery fee.

10. The restaurant uses Dijkstra's algorithm to calculate the shortest distance between customer and restaurant. (Explore the networkx Python package for this, or alternatively find a way to implement the algorithm yourself.)
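A minimal networkx sketch on a toy graph; in the assignment the graph would be built from nodes.csv and edges.csv, with edge lengths as the weight attribute (the node ids and weights below are made up):

```python
import networkx as nx

# Toy undirected graph: (node_u, node_v, length). In the assignment these
# triples would come from edges.csv, with node ids from nodes.csv.
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 1.2), (2, 3, 0.7), (1, 3, 2.5), (3, 4, 1.1),
])

# Shortest weighted distance between two nodes (Dijkstra under the hood).
dist = nx.shortest_path_length(G, source=1, target=4, weight="weight")
print(dist)  # 3.0 via 1 -> 2 -> 3 -> 4 (1.2 + 0.7 + 1.1)
```

Omitting weight="weight" would count hops instead of edge lengths, which silently gives the wrong distance, so the keyword matters here.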

11. The branch and customer nodes are provided in branches.csv, edges.csv and nodes.csv at here.

12. The below columns are error-free (i.e. don't look for any errors in them in the dirty data):

○    order_id

○    time

○    the numeric quantity in order_items

○    delivery_fee

13. For missing data imputation, you are recommended to try all possible methods to impute missing values and keep the most appropriate one, i.e. the one that provides the best performance.

14. As EDA is part of this assessment, no further information will be given publicly regarding the data. However, you can brainstorm with the teaching team during tutorials or on the Ed forum.

15. No libraries/packages restriction.

Methodology (3/21)

The report Group<group_id>_ass2_task1.ipynb should demonstrate the methodology (including all steps) used to achieve the correct results for all three files.

You need to demonstrate your solution using correct steps.

●    Your solution should be presented in a proper way including all required steps.

●    You need to select and use the appropriate Python functions for input, process and output.

●    Your solution should be an efficient one, without redundant operations or unnecessary reading and writing of the data.

Documentation (1.5/21)

The cleaning task must be explained in a well-formatted report (with appropriate sections and  subsections). Please remember that the report must explain the complete EDA to examine the data, your methodology to find the data anomalies and the suggested approach to fix those anomalies.

The report should be organised in a proper structure to present your solutions with clear and meaningful titles for sections and subsections or sub-subsection if needed.

●    Each step in your solution should be clearly described and justified. For example, you can write to explain your idea of the solution, any specific settings, and the reason for using a particular function, etc.

●    Explanation of your results, including all intermediate steps, is required. This can help the marking team understand your solution and give partial marks if the final results are not fully correct.

●    All your codes need proper (but not excessive) commenting.

Task 2: Data Reshaping (9/40)

You need to complete task 2 with the suburb_info.xlsx file ONLY. With the given property and suburb related data, you need to study the effect of different normalisation/transformation methods (e.g. standardisation, min-max normalisation, log, power, and box-cox transformations) on these columns: number_of_houses, number_of_units, population, aus_born_perc, median_income, median_house_price. You need to observe and explain their effect, assuming we want to develop a linear model to predict the "median_house_price" using the 5 attributes mentioned above.

When reshaping the data, we normally have two main criteria.

●    First, we want our features to be on the same scale; and

●    Second, we want our features to  have as much linear relationship as possible with the target variable (i.e., median_house_price).

You need to first explore the data to see if any scaling or transformation is necessary (if yes, why? and if not, also why?) and then perform the appropriate actions and document your results and observations. Please note that the aim of this task is to prepare the data for a linear regression model, not to build the linear model. That is, you need to record all your steps, from loading the raw data to completing all the required transformations, if any.
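As a rough illustration of the candidate transformations, the sketch below applies min-max scaling, standardisation, a log transform and a Box-Cox transform to a made-up right-skewed column; the real columns come from suburb_info.xlsx:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Made-up right-skewed values standing in for a column like median_house_price.
x = pd.Series([300_000, 450_000, 500_000, 720_000, 2_500_000], dtype=float)

minmax = (x - x.min()) / (x.max() - x.min())  # rescale to [0, 1]
standardised = (x - x.mean()) / x.std()       # zero mean, unit variance
logged = np.log(x)                            # compress the long right tail
transformed, lam = stats.boxcox(x)            # requires strictly positive data

# Skewness before vs after the log transform, as one EDA check of linearity-
# friendliness; scaling alone (min-max, z-score) does not change skewness.
print(f"skew raw={x.skew():.2f}  log={logged.skew():.2f}")
```

Comparing skewness (or scatter plots against the target) before and after each transform is one way to document the "if yes, why?" decision the task asks for.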

Input files: suburb_info.xlsx

Submission: Group<group_id>_ass2_task2.ipynb

You could consider the scenario of task 2 to be an open exploratory project: Jackie and Kiara have got some funding to do an exploratory consulting project on the property market. We wish to understand any interesting insights from the relevant features in different suburbs of Melbourne. Before we step into the final linear regression modelling stage, we wish to hire you to prepare the data for us and tell us whether any transformation/normalisation is required. Will those data satisfy the assumptions of linear regression? How could we make our data more suitable for the later modelling stage?

As an exploratory task, you only need to put your journey of exploration in proper documentation in your .ipynb file; no other output file is to be submitted for task 2. We will mark task 2 based on the .ipynb content.

Table 3. Description of the suburb_info.xlsx file

COLUMN DESCRIPTION

suburb The suburb name, which serves as the index of the data

number_of_houses The number of houses in the property suburb

number_of_units The number of units in the property suburb

municipality The municipality of the property suburb

aus_born_perc The percentage of the Australian-born population in the property suburb

median_income The median income of the population in the property suburb

median_house_price The median 'house' price in the property suburb

population The population in the property suburb

Documentation (1.5/9)

The reshaping task must be explained in a well-formatted report (with appropriate sections and subsections). Please remember that the report must explain the complete EDA to examine the data, your methodology to find the data anomalies and the suggested approach to fix those anomalies.

The report should be organised in a proper structure to present your solutions with clear and meaningful titles for sections and subsections or sub-subsection if needed.

●    Each step in your solution should be clearly described and justified. For example, you can write to explain your idea of the solution, any specific settings, and the reason for using a particular function, etc.

●    Explanation of your results, including all intermediate steps, is required. This can help the marking team understand your solution and give partial marks if the final results are not fully correct.

●    All your codes need proper (but not excessive) commenting.

Task 3: Declaration and Interview (10/40)

Input files: Declaration_GroupXXX.ipynb

Submission: Group<group_id>_ass2_task3.pdf, Group<group_id>_AI_Records.docx/pdf/txt

3.1 Generative AI Tools Declaration Form (Hurdle)

Task Details: For this task, all students must complete the Generative AI Tools Declaration Form and include the following statement clearly in the submission (either download the provided ipynb file or generate your own):

We ___(Student Name and ID)__, the member(s) of Group _______, claim that we DO/DO NOT use any Generative AI tools to complete this assessment.

If your group used Generative AI tools, you must clearly document and attach all conversation records with these tools as part of your submission.

After completing the form, download it as a PDF and include it in the submission.

Requirement:

●    This task must be completed (HURDLE)

●    Conversation records should be all in English

●    All conversations need to be recorded. If you are unable to download them, or you have multiple conversations, copy and paste them into one single file.

Consequences of Missing Generative AI tools Declaration

Failure to submit a complete Generative AI Tools Declaration Form and (if applicable) AI conversation records will result in not meeting the hurdle requirement for Assessment 2.

3.2 Interview (10/40 + Hurdle)

There will be an interview for your A2. The aim of the interview is to check your understanding of your entire A2 work and to make sure all submissions comply with the academic integrity requirements of Monash.

Task Details:

●    Time/Date: Week 12, during your allocated Applied sessions

●    Format: 1-on-1, i.e., one TA interviews one group

●    Duration: Approximately 5-10 minutes per group

●    Location: Normal location of your allocated applied sessions in your Allocate+ records

●    Arrangement: We will provide a time schedule for every group during their allocated session; please arrive at your allocated time slot. If you arrive earlier, please wait patiently outside the room.

●    Content: You will be asked questions related to your A2 submission (code, methodology, specific functions, etc.)

●    Criterion: Please refer to the A2 marking rubrics

Requirement:

●    Mandatory attendance (HURDLE)

●    Both members need to show and sign the attendance sheet

●    Both members need to answer questions

Consequences of Non-Attendance:

Failure to attend the interview or inability to satisfactorily demonstrate your work will result in not meeting the hurdle requirements for Assignment A2. Consequently, you will receive ZERO for Assessment 2.

The following excuses will not be accepted:

●    Forgetting to come to the applied session

●    Forgetting to prepare for the interview, i.e. forgetting your own solution

●    Having too limited time to answer the questions properly

●    Being too nervous to talk in English and thus not properly answering the questions

●    Directly using online resources without proper reference and not understanding the submitted work

