import pandas as pd
import pickle

# Open pickle file and load data:
with open('ratings.pickle', 'rb') as f:
    ratings_data = pickle.load(f)

# Print ratings_data
print(ratings_data)
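For context, a file like ratings.pickle is produced with pickle.dump. The sketch below is a minimal, assumed example of that write step; the dictionary contents are made up and are not the actual course data.

import pickle

# Hypothetical stand-in for the real ratings object
ratings = {'The Godfather': 9.2, 'Casablanca': 8.5, 'Parasite': 8.6}

# Serialize it in binary write mode ('wb'), mirroring the 'rb' used when loading
with open('ratings.pickle', 'wb') as f:
    pickle.dump(ratings, f)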
import numpy as np
import pandas as pd

# First, let's save the link to the Excel file
battle_link = 'https://assets.datacamp.com/production/repositories/487/datasets/5e8897e4624f8577ed0d33aeafbe7bd88bfc424b/battledeath.xlsx'

# Next, let's load the file into a pandas ExcelFile object
xls = pd.ExcelFile(battle_link)

# Let's see its type
print(type(xls))
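The printed type is the ExcelFile class (its exact module path varies slightly across pandas versions). An ExcelFile object is mainly useful for inspecting the workbook and parsing individual sheets; the sketch below reuses the xls object from above and does not assume any particular sheet names.

# List the sheet names in the workbook
print(xls.sheet_names)

# Parse the first sheet into a DataFrame (index 0 works regardless of the sheet's name)
df = xls.parse(0)
print(df.head())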
import numpy as np

# Let's wrap the file read in a try-except block
try:
    data = np.genfromtxt('titanic.csv', delimiter=',', names=True, dtype=None, encoding='utf8')
except Exception as e:
    print(type(e))

# Let's see the first five rows of the resulting 1-D structured array
data[:5]
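Because names=True turns the header row into field names, genfromtxt returns a structured array whose columns can be pulled out by name. The 'survived' and 'age' names below are assumptions matching the seaborn Titanic columns used elsewhere on this page; check data.dtype.names for the actual header.

# The field names come from the CSV header row
print(data.dtype.names)

# Access individual columns by name (adjust the names to the actual header)
print(data['survived'][:5])
print(data['age'][:5])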
# First, let's import numpy.
import numpy as np

# Next, let's save the titanic_df we loaded before as a titanic.csv file.
titanic_df.to_csv('titanic.csv', index=False)

# Next, let's load titanic.csv as a numpy array.
titanic_arr = np.loadtxt('titanic.csv', delimiter=',', skiprows=1, usecols=[0, 1, 4])

# Parameters: delimiter=',' because the file is comma-separated, skiprows=1 to skip
# the header row, and usecols=[0, 1, 4] to keep only numeric columns, since loadtxt
# cannot parse text fields.
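A quick sanity check on the result; the exact row count depends on the dataset, but the array should be 2-D float data with one column per entry in usecols.

# loadtxt returns a plain float ndarray
print(titanic_arr.shape)    # (number of rows, 3)
print(titanic_arr.dtype)    # float64 by default
print(titanic_arr[:5])      # first five rows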
import seaborn as sns

# Load seaborn's built-in Titanic dataset as a pandas DataFrame
titanic_df = sns.load_dataset('titanic')

# Peek at the first five rows
titanic_df.head()
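Before writing the DataFrame out or feeding it to NumPy, it helps to see which columns are numeric and which hold text or missing values; this is just standard pandas inspection, nothing seaborn-specific.

# Column names, dtypes, and non-null counts
titanic_df.info()

# Count missing values per column
print(titanic_df.isna().sum())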
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
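The L1-relative multipliers in the right-hand column follow directly from the nanosecond figures; a small sketch that recomputes them (values hard-coded from the table above, not measured):

# Latency figures from the table, in nanoseconds
l1_cache_ns = 0.5
l2_cache_ns = 7
main_memory_ns = 100

print(l2_cache_ns / l1_cache_ns)      # 14.0  -> "14x L1 cache"
print(main_memory_ns / l1_cache_ns)   # 200.0 -> "200x L1 cache"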