
Exercises: OSEMN (Obtaining, Scrubbing, Exploring, Modeling, iNterpreting)

As part of the "Data Science is OSEMN" module, I worked through the exercises for Obtaining Data.

Example Code


 # Exercises  
 """http://people.duke.edu/~ccc14/sta-663/DataProcessingSolutions.html#exercises"""  
   
 """1. Write the following sentences to a file “hello.txt” using open and write.   
 There should be 3 lines in the resulting file.  
   
 Hello, world.  
 Goodbye, cruel world.  
 The world is your oyster."""  
   
 # Avoid shadowing the built-in str; a raw string keeps the
 # backslash in the Windows path from being read as an escape
 file_path = r'Data\Test.txt'

 f = open(file_path, 'w')
 f.write('Hello, world.\n')
 f.write('Goodbye, cruel world.\n')
 f.write('The world is your oyster.\n')
 f.close()

 # Writes the same three lines, only in one statement
 f = open(file_path, 'w')
 f.write('Hello, world.\nGoodbye, cruel world.\nThe world is your oyster.\n')
 f.close()

 with open(file_path, 'r') as f:
   content = f.read()
   print(content)


 """2. Using a for loop and open, print only the lines from the
 file ‘hello.txt’ that begin with ‘Hello’ or ‘The’."""

 for line in open(file_path, 'r'):
   if line.startswith('Hello') or line.startswith('The'):
     print(line, end='')  # each line already ends with a newline
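The write step in exercise 1 can also be wrapped in a `with` block using `writelines`, so the file is closed automatically even if an error occurs. A minimal sketch, here writing to `hello.txt` (the filename the exercise asks for, rather than the `Data\Test.txt` path used above):

```python
# Write the three lines via a with block; the file is closed
# automatically when the block exits, even on error
lines = ['Hello, world.\n',
         'Goodbye, cruel world.\n',
         'The world is your oyster.\n']

with open('hello.txt', 'w') as f:
    f.writelines(lines)

with open('hello.txt') as f:
    print(f.read().count('\n'))  # 3 lines written
```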
   
   
 """3. Most of the time, tabular files can be read corectly using   
 convenience functions from pandas. Sometimes, however, line-by-line processing   
 of a file is unavoidable, typically when the file originated from   
 an Excel spreadsheet. Use the csv module and a for loop to create   
 a pandas DataFrame for the file ugh.csv."""  
   
 # Reading a csv line by line
 import csv
 import pandas as pd

 with open('Data/OECD - Quality of Life.csv') as f:
   reader = csv.reader(f)
   header = next(reader)   # first row holds the column names
   rows = list(reader)     # remaining rows hold the data

 tempDf = pd.DataFrame(rows, columns=header)
 tempDf
   
 # the easy way
 otherDf = pd.read_csv('Data/OECD - Quality of Life.csv')
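As a self-contained check that the two approaches agree, the comparison below uses an inline two-column sample (a stand-in for the OECD file, which is not bundled here). When `read_csv` is told to keep everything as strings, it produces the same DataFrame that the `csv.reader` loop builds:

```python
import csv
import io
import pandas as pd

# Hypothetical sample standing in for the CSV file on disk
raw = "Country,IQ\nNorway,100\nJapan,105\n"

# Line-by-line: first row is the header, the rest are data rows
reader = csv.reader(io.StringIO(raw))
header = next(reader)
manual_df = pd.DataFrame(list(reader), columns=header)

# The convenience function; dtype=str keeps values as strings,
# matching what csv.reader produces
easy_df = pd.read_csv(io.StringIO(raw), dtype=str)

print(manual_df.equals(easy_df))  # True
```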
   

Sample Data
