
Data Science Dev

Design, develop, deploy


Tools to Annotate Images

Here is a link to a list of manual tools for labeling and annotating images.

https://en.wikipedia.org/wiki/List_of_manual_image_annotation_tools

Ratsnake seems good for image segmentation.


Docker on Ubuntu Tutorial

I experimented with Docker lately.  Here are a few commands, notes, and issues I resolved while working on Ubuntu 16.04.  I'm assuming you have Docker installed (sudo apt-get update && sudo apt-get install docker-ce).


Tutorial on the web for "dockerizing" Python applications: https://runnable.com/docker/python/dockerize-your-python-application


Some Python example code I made to test with (imgtest.py).  This is the code I wanted to place in a Docker image.  It basically makes a random image and saves it.

import numpy as np
from PIL import Image
 
# make a random grayscale image, convert it to RGB, and save it to /app
img = np.random.randint(0,255,size=(1024,768))
img = img.astype('uint8')
img = Image.fromarray(img)
img = img.convert('RGB')
img.save('/app/test.jpg')
 

Here is an example Dockerfile I used.  I saved it as Dockerfile and placed it in the same folder as my Python script.

FROM python:2
 
# copy the script into the image root
ADD imgtest.py /
 
# install dependencies and create the output directory used by imgtest.py
RUN pip install numpy
RUN pip install pillow
RUN mkdir /app
 
CMD ["python","./imgtest.py"]

To build the Docker image, I moved to the folder where both the Dockerfile and the Python script reside, and then typed:
 
sudo docker build .  
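If you tag the image when you build it (optional, and not what I did above, but it saves copying the image id from sudo docker images later), the run commands below can use the name instead of the id:

sudo docker build -t imgtest .
sudo docker run -it imgtest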

When building, I had to add DNS servers as described below (from https://stackoverflow.com/questions/28668180/cant-install-pip-packages-inside-a-docker-container-with-ubuntu).  I tried the other methods mentioned there, but this was the only one that worked for me.  I also commented out the DOCKER_OPTS line in /etc/default/docker.

"For Ubuntu users

You need to add new DNS addresses in the docker config:

sudo nano /lib/systemd/system/docker.service

Add the dns entries after ExecStart:

--dns 10.252.252.252 --dns 10.253.253.253

It should look like this:

ExecStart=/usr/bin/dockerd -H fd:// --dns 10.252.252.252 --dns 10.253.253.253

sudo systemctl daemon-reload
sudo service docker restart"


After your build you should be able to list the docker images available using the following command:

sudo docker images

 
 
I wanted to run the docker image and make a container.  If I ran it with:
 
sudo docker run -it <image_id>
 
It would just execute, create the random image output in the container, and then the container would exit with no random image persisted to storage.  Instead, I used a bind mount, which lets me connect a container's directory to a directory on the host.  The resulting random image is then placed on the Docker host.
 
sudo docker run -d -it -v <host directory>:<container directory> <image_id>
 
So on my computer it would look something like:
 
sudo docker run -d -it -v /home/npropes/Desktop:/app  4a823423bca
 
The random image output should appear on my Desktop.

Additional commands below:

I can see the list of Docker containers that are running or have run by using this command:

sudo docker ps -a
 
To stop Docker containers, use one of these commands (stop shuts the container down gracefully; kill terminates it immediately):
 
sudo docker stop <container>
sudo docker kill <container>
 
To delete a docker container:
 
sudo docker rm <container>
 
To delete a Docker image:
 
sudo docker rmi <image>
 
 
 

State Space Reconstruction from Time Series Data - Embedding Dimension - Time Delay

I was experimenting with some features related to state space reconstruction, trying to determine when a fault occurs from time series data.  I thought I would keep it here in case I need to refer to it later.  The features are the time delay and the embedding dimension: the time delay is chosen at the first minimum of the mutual information between the signal and its delayed copy, and the embedding dimension is chosen where the percentage of false nearest neighbors first drops to zero.  The code is below with Lorenz model generated data.  From the graphs, the time delay = 10 and the embedding dimension = 3.

import numpy as np
from sklearn.neighbors import NearestNeighbors
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# plot histograms
def plot_histograms(xedges_1d,hist_1d,xedges_2d,yedges_2d,hist_2d):
    # plot histograms
    xedgesctr_1d = xedges_1d[0:len(xedges_1d)-1]+(xedges_1d[1:len(xedges_1d)] - xedges_1d[0:len(xedges_1d)-1])/2.0
    fig = plt.figure(1)
    fig.add_subplot(121)
    plt.bar(xedgesctr_1d,hist_1d)
    # plot 2d histogram
    xedgesctr_2d = xedges_2d[0:len(xedges_2d)-1]+(xedges_2d[1:len(xedges_2d)] - xedges_2d[0:len(xedges_2d)-1])/2.0
    yedgesctr_2d = yedges_2d[0:len(yedges_2d)-1]+(yedges_2d[1:len(yedges_2d)] - yedges_2d[0:len(yedges_2d)-1])/2.0
    ax1 = fig.add_subplot(122,projection='3d')
    _xx, _yy = np.meshgrid(xedgesctr_2d,yedgesctr_2d)
    xp,yp = _xx.ravel(), _yy.ravel()
    top = hist_2d.ravel()
    bottom = np.zeros_like(top)
    width=depth=2
    ax1.bar3d(xp,yp,bottom,width,depth,top)
    plt.show()


# compute delay for state space reconstruction
def compute_delay(data,bins,max_delay):
    MI = np.zeros((max_delay+1))
    for T in range(0,max_delay+1):
        xs = data[0:len(data)-T]
        ys = data[T:len(data)]

        # compute 1D histograms
        hist_1d, xedges_1d = np.histogram(data,bins=bins)
        norm_hist_1d = hist_1d/float(len(data))

        # compute 2D histogram and normalize
        hist_2d, xedges_2d, yedges_2d = np.histogram2d(xs,ys,bins=bins)
        norm_hist_2d = hist_2d/float(len(xs))

        # compute mutual information
        for i in range(0,len(xs)):
            # compute P(x) and P(y)
            x = xs[i]
            y = ys[i]
            xedge_left = xedges_1d[0:len(xedges_1d)-1]
            xedge_right = xedges_1d[1:len(xedges_1d)]                
            xi = np.argwhere((x >= xedge_left) & (x < xedge_right))
            yi = np.argwhere((y >= xedge_left) & (y < xedge_right))
            if len(xi) == 0:
                xi = len(xedge_left)-1
            else:
                xi = xi[0][0]
            if len(yi) == 0:
                yi = len(xedge_left)-1
            else:
                yi = yi[0][0]
            Px = norm_hist_1d[xi]
            Py = norm_hist_1d[yi]
            # compute P(x,y)
            xedge_left = xedges_2d[0:len(xedges_2d)-1]
            xedge_right = xedges_2d[1:len(xedges_2d)]
            yedge_left = yedges_2d[0:len(yedges_2d)-1]
            yedge_right = yedges_2d[1:len(yedges_2d)]
            xi = np.argwhere((x >= xedge_left) & (x < xedge_right))
            yi = np.argwhere((y >= yedge_left) & (y < yedge_right))
            if len(xi) == 0:
                xi = len(xedge_left)-1
            else:
                xi = xi[0][0]
            if len(yi) == 0:
                yi = len(yedge_left)-1
            else:
                yi = yi[0][0]
            Pxy = norm_hist_2d[xi,yi]
            # add to mutual information (shouldn't have to worry about divide by zero)
            MI[T] += Pxy*np.log2(Pxy/(Px*Py))

    # find first minimum of mutual information or use 1 if minimum is on right edge (noisy signal or small range of T)
    time_delay = 0
    mutual_info = MI[0]
    for T in range(1,max_delay+1):
        if mutual_info < MI[T]:
            break
        else:
            mutual_info = MI[T]
            time_delay = T
    if time_delay == max_delay:
        time_delay = 1

    # return result
    return MI, time_delay, xedges_1d, norm_hist_1d, xedges_2d, yedges_2d, norm_hist_2d

# compute embedding dimension
def compute_embdim(data,time_delay,threshold,max_dimension):
    # arrange data into time delay vectors
    x = np.zeros((len(data)-max_dimension*time_delay,max_dimension+1))
    for D in range(0,max_dimension+1):
        x[:,D] = data[D*time_delay:len(data)-((max_dimension-D)*time_delay)]

    # compute percentage of false nearest neighbors
    pfnn = np.zeros((max_dimension+1))
    for D in range(0,max_dimension):
        d1 = x[:,0:D+1]
        d2 = x[:,0:D+2]
        nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(d1)
        distances, indices = nbrs.kneighbors(d1)
        indices = indices[:,1]
        d1_nn = d1[indices,:]
        d2_nn = d2[indices,:]
        d1s = d1-d1_nn
        d2s = np.abs(d2-d2_nn)
        d2s = d2s[:,D+1]
        result = np.zeros((len(d1s)))
        # correction factor for zero distances
        cf = 0
        for i in range(0,len(d1s)):
            v = d1s[i,:]
            Rd = np.sqrt(np.matmul(v,v.T))
            result[i] = d2s[i]/Rd
            if result[i] == 0:
                cf = cf + 1
        result_fnn = (result > threshold).astype('float')
        pfnn[D] = np.sum(result_fnn)/float(len(result_fnn)-cf)

    # find embedding dimension which is when pfnn first goes to zero
    for i in range(0,len(pfnn)):
        embedding_dimension = i + 1
        if pfnn[i] < 0.01:
            break

    return pfnn, embedding_dimension
    
# lorenz equation simulated data
x = np.zeros(4000)
y = np.zeros(4000)
z = np.zeros(4000)
x[0] = -12.0
y[0] = 0.001
z[0] = 0.4
Ts = 0.01
for i in range(0,3999):
    x[i+1] = x[i] + Ts*16.0*(y[i]-x[i])
    y[i+1] = y[i] + Ts*(-x[i]*z[i] + 45.92*x[i] - y[i])
    z[i+1] = z[i] + Ts*(x[i]*y[i] - 4.0*z[i])

# compute time delay and embedding dimension

MI, time_delay, xedges_1d, norm_hist_1d, xedges_2d, yedges_2d, norm_hist_2d = compute_delay(x,int(np.sqrt(1000)),50)
pfnn, embedding_dimension = compute_embdim(x,time_delay,15.0,10)

print 'time delay = ' + str(time_delay)
print 'embedding dimension = ' + str(embedding_dimension)

plot_histograms(xedges_1d, norm_hist_1d, xedges_2d, yedges_2d, norm_hist_2d)

plt.subplot(121)
plt.plot(MI,marker='*')
plt.title('mutual information vs. time delay')
plt.subplot(122)
plt.plot(range(1,len(pfnn)+1),pfnn,marker='*')
plt.title('percentage false nearest neighbors vs. embedding dimension')

plt.show()

 

CNN + RNN TensorFlow Example Code

This is example code for a CNN + RNN structure used for analyzing time-series data.  There is a separate CNN structure for each time step of windowed data.  The RNN learns the time dependency between feature vectors extracted by the CNNs.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import math

# 1-d convolutional layer
def conv1d(X, num_filters=8, filter_width=3, stride=1, padding='SAME'):
    # helper function for a 1D convolutional filter
    # initalize filter
    window_size = int(X.get_shape()[1])
    num_sensors = int(X.get_shape()[2])
    stddev = 1
    f = tf.Variable(tf.truncated_normal((filter_width,num_sensors,num_filters),stddev=.2),trainable=True,name='conv1d_filter')
    # initialize bias
    b = tf.Variable(0.0,name='conv1d_bias')
    conv = tf.nn.conv1d(value=X,filters=f,stride=stride,padding=padding,name='conv1d_op')
    return tf.add(conv,b)

# print out graph structure
def print_graph():
    # prints the graph operations out
    with tf.Session() as sess:
        op = sess.graph.get_operations()
    for o in op:
        print o.outputs

# container to hold cnnrnn model structure
class cnnrnn_model:
    def __init__(self,time_steps,window_size,num_sensors,filters,filter_size,rnn_nodes):
        ###### model creation #############################################################
        # placeholders
        self.X = tf.placeholder(tf.float32,[None,time_steps,window_size,num_sensors],name='X')
        self.Y = tf.placeholder(tf.float32,[None,time_steps,1],name='Y')

        # create the convolutional layers for each CNN per time step
        m = []
        for i in range(0,time_steps):
            # batch, time_step, window_size, num_sensors
            m1 = conv1d(self.X[:,i,:,:], num_filters=filters*1, filter_width=filter_size, stride=1, padding='SAME')
            m1 = tf.nn.relu(m1,name='relu1d')
            m1 = tf.nn.pool(m1, window_shape=(4,), pooling_type='MAX', padding='SAME', strides=(4,), name='pool1d')
            m1 = conv1d(m1, num_filters=filters*1, filter_width=filter_size, stride=1, padding='SAME')
            m1 = tf.nn.relu(m1,name='relu1d')
            m1 = tf.nn.pool(m1, window_shape=(4,), pooling_type='MAX', padding='SAME', strides=(4,), name='pool1d')
            m1 = conv1d(m1, num_filters=filters*1, filter_width=filter_size, stride=1, padding='SAME')
            m1 = tf.nn.relu(m1,name='relu1d')
            m1 = tf.nn.pool(m1, window_shape=(2,), pooling_type='MAX', padding='SAME', strides=(2,), name='pool1d')
            m1 = conv1d(m1, num_filters=filters*1, filter_width=1, stride=1, padding='SAME')
            m1 = tf.nn.relu(m1,name='relu1d')
            m1 = tf.nn.pool(m1, window_shape=(2,), pooling_type='MAX', padding='SAME', strides=(2,), name='pool1d')
            sh1 = int(m1.get_shape()[1])
            sh2 = int(m1.get_shape()[2])
            m1 = tf.reshape(m1, [-1,1,sh1*sh2])
            m.append(m1)
            
        c = tf.concat(m,1)

        basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=rnn_nodes)
        model, states = tf.nn.dynamic_rnn(cell=basic_cell, inputs=c, dtype=tf.float32, time_major=False)

        self.model = tf.layers.dense(model,units=1,activation=None)

        self.loss = tf.losses.mean_squared_error(self.Y, self.model)

        optimizer = tf.train.AdamOptimizer(1e-3)
        self.training_op = optimizer.minimize(self.loss) 
        self.init = tf.global_variables_initializer()
        ###### end model creation #############################################################

# some simulated data to play with
def create_test_data(batch_size, time_steps, window_size, num_sensors):
    # create fake training data for testing neural net
    x = np.zeros((batch_size,time_steps,window_size,num_sensors))
    y = np.zeros((batch_size,time_steps,1)) # these are the outputs of the RNN + dense layer
    num_examples = batch_size + time_steps
    xe = np.zeros((num_examples,window_size,num_sensors))
    ye = np.zeros((num_examples,1))
    # normal case (no fault)
    for e in range(0,num_examples/2): # each example
        wn = 1
        d = 0.73
        c2 = -wn*wn
        c1 = -2*d*wn
        c3 = 1
        x1 = 0
        x2 = 0
        for s in range(0,window_size):  # each sample
            x1 = x1 + 0.4*x2
            x2 = x2 + 0.4*(c1*x2 +c2*x1 + c3)
            xe[e,s,0] = -x1*c2 + np.random.randn()*0.1
            xe[e,s,1] = x2 + np.random.randn()*0.1
            ye[e,0] = 1.0
    # fault case (damping coefficient changing)
    for e in range(num_examples/2,num_examples):
        i = e-num_examples/2
        wn = 1
        d = 0.72 - 0.3*float(i+1)/(num_examples/2)
        c2 = -wn*wn
        c1 = -2*d*wn
        c3 = 1
        x1 = 0
        x2 = 0
        for s in range(0,window_size):
            x1 = x1 + 0.4*x2
            x2 = x2 + 0.4*(c1*x2 +c2*x1 + c3)
            xe[e,s,0] = -x1*c2 + np.random.randn()*0.1
            xe[e,s,1] = x2 + np.random.randn()*0.1
            ye[e,0] = math.exp(-0.1*i)
    # reorganize data into timesteps
    for b in range(0,batch_size):
        for t in range(0,time_steps):    
            x[b,t,:,:] = xe[b + t,:,:]
            y[b,t,:] = ye[b + t,:]
    return x,y

###### model parameters ###########################################################
time_steps = 5
window_size = 64
num_sensors = 2 
filters = 6
filter_size= 3
rnn_nodes= 8

###### training parameters ########################################################
batch_size = 64
n_epochs = 501

###### create test data ###########################################################
trainX, trainY = create_test_data(batch_size, time_steps, window_size, num_sensors)

###### model creation #############################################################
model = cnnrnn_model(time_steps,window_size,num_sensors,filters,filter_size,rnn_nodes)

###### saver object to save and restore model variables ###########################
saver = tf.train.Saver()

###### model training #############################################################
with tf.Session() as sess:
    sess.run(model.init)       
    for e in range(0,n_epochs):
        sess.run(model.training_op, feed_dict={model.X: trainX, model.Y: trainY})
        loss_out = sess.run(model.loss, feed_dict={model.X: trainX, model.Y: trainY})
        if (e+1) % 100 == 1:
            print 'epoch = ' + str(e+1) + '/' + str(n_epochs) + ', loss = ' + str(loss_out)
    saver.save(sess,'/tmp/test-model')
    result = sess.run(model.model, feed_dict={model.X: trainX, model.Y: trainY})
    print result.T
###### end model training #########################################################

###### example restoring model ####################################################
tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('/tmp/test-model.meta')
    new_saver.restore(sess,tf.train.latest_checkpoint('/tmp/'))
    graph = tf.get_default_graph()
    X = graph.get_tensor_by_name("X:0")
    Y = graph.get_tensor_by_name("Y:0")
    model = graph.get_tensor_by_name('dense/BiasAdd:0')
    result = sess.run(model, feed_dict={X: trainX, Y: trainY})
    print result.T

Installing Tensorflow 1.3 / CUDA Toolkit 8.0 / cuDNN 6.0 on ASUS GL502VS-DS71 Laptop with Ubuntu 16.04 and Nvidia 1070


My ASUS GL502VS-DS71 laptop had some operating system problems recently, so I decided to reinstall Ubuntu on it.  Somehow the Nvidia driver got updated to a more recent version (384.90) that didn't work with TensorFlow.  This was an excellent opportunity to refresh the TensorFlow installation procedure on the blog.  If I installed the CUDA Toolkit 8.0 with the included Nvidia driver, it would not let me log in to Ubuntu; it kept returning to the login screen.  This is probably because the driver that comes with the CUDA Toolkit is too old to support the Nvidia 1070 card built into my laptop.  Therefore, we need to install an Nvidia driver that works with the 1070 card first, and then install the CUDA Toolkit 8.0 without the included Nvidia driver.  The instructions are below:

1. Reboot computer and get into BIOS (delete/DEL key while restarting or other key)

2. If your motherboard has Secure Boot, turn it off/disable.  Save BIOS changes and reboot.

3. Install Ubuntu 16.04.x

4. After installation, open terminal.

5. sudo add-apt-repository ppa:graphics-drivers/ppa

6. sudo apt-get update

7. sudo apt-get upgrade

8. Open Software & Updates from launcher.

9. Select the Additional Drivers tab.

10. Select the Using NVIDIA binary driver - version 378.xx (I have 378.13)

11. Reboot computer.

12. Download NVIDIA CUDA Toolkit 8.0 (use the .runfile only; you may have to search NVIDIA's archives for this version, since the newer version won't work with TensorFlow) and the NVIDIA cuDNN library 6.0 (again, not the newest version 7.0) from NVIDIA's website.  You have to log in to download the cuDNN libraries.

13. Follow the instructions on the NVIDIA website to install the CUDA Toolkit (and the patch, if available), but do not upgrade the NVIDIA driver (you will be asked during .runfile execution) or change the default install directories

14. Follow the instructions on the NVIDIA website to install cuDNN (I put it in my home directory in a folder called cuda)

15. Edit ~/.bashrc and add the following lines at the end of the file:
export LD_LIBRARY_PATH=~/cuda/lib64/:/usr/local/cuda-8.0/lib64/:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-8.0/bin:$PATH

16. Log out of Ubuntu and log back in

17. Open terminal.

18. sudo apt-get install libblas-dev liblapack-dev libjpeg-dev python-dev

19a. For python 2.7:
sudo apt-get install python-pip
sudo apt-get install idle
sudo pip install --upgrade pip
sudo pip install tensorflow-gpu (if you have a GPU)

19b. For python 3.5:
sudo apt-get install python3-pip
sudo apt-get install idle3
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow-gpu (if you have a GPU)

20. Reboot
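After rebooting, a quick way to check that TensorFlow can actually see the GPU (a minimal check using the TF 1.x device_lib API):

from tensorflow.python.client import device_lib
# lists the CPU and, if CUDA/cuDNN are set up correctly, the GPU device
print [d.name for d in device_lib.list_local_devices()]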

Installing Jetpack 3.1 on Jetson TX2 with Tensorflow 1.3 / Keras / hdf5 (.h5)

1. Download Jetpack 3.1 runfile from here on Host PC.

2. Run the installer "./JetPack-L4T-3.1-linux-x64.run" from terminal (you may need to navigate directory to location where downloaded).

3. Follow the prompts and instructions to install.  The download may take a while, and after it has completed, the instructions in a terminal will tell you to connect the Jetson to the Host PC in force recovery mode.  Make sure the Jetson is connected by an ethernet cable to the same router as the Host PC.

4. After the Jetson has completed the install, connect the Jetson's HDMI to a monitor and attach a keyboard and mouse through the USB port (using a USB port expander dongle).

5. Open a terminal in Ubuntu on the Jetson and enter the following commands (some may take a while to compile; just be patient!).

6. sudo apt update

7. sudo apt-get install libblas-dev liblapack-dev python-dev idle nano python-pip

8. sudo pip install --upgrade pip

9. sudo pip install numpy

10. Download the wheel file provided by Peter Lee, https://github.com/peterlee0127/tensorflow-tx2

11. Install the wheel file by navigating to the directory of the downloaded wheel file and typing "sudo pip install tensorflow-1.3.0-cp27-cp27mu-linux_aarch64.whl"

12. If you need .h5 file load and save, run "sudo apt-get install libhdf5-dev" and then "sudo pip install h5py"

13. If you need Keras, run "sudo pip install keras"
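To verify the whole stack (TensorFlow backend, Keras, and h5py) works end to end, here is a quick sanity check I would run (a minimal sketch, assuming the packages above installed cleanly):

from keras.models import Sequential, load_model
from keras.layers import Dense

# build a tiny model, save it to .h5 (requires h5py/libhdf5), and load it back
model = Sequential()
model.add(Dense(4, input_shape=(8,), activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.save('test.h5')
restored = load_model('test.h5')
restored.summary()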

Flask / Windows IIS Setup

I wrote a little manual to help me remember how to set up Windows IIS with Python / Flask / WSGI to get a web app going.  You can get it here
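For reference, the kind of minimal Flask app the manual wires up behind IIS looks something like this (an illustrative sketch, not taken from the manual itself):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask behind IIS'

if __name__ == '__main__':
    # local test server; under IIS the app is served through WSGI/FastCGI instead
    app.run()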

Multithreading TensorFlow / Keras Models

I made an example (with some code borrowed from the Web) of how you can multithread TensorFlow models in Python.  The key I found was to wrap any execution of a model's predict function in a lock.  Here is a link to the files.  This is not for training different models in parallel; it is only for after you have models already trained and want to use them in parallel threads.
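The idea in miniature (a sketch with illustrative names; the real wiring is in the linked files):

import threading

# one lock shared by all worker threads
lock = threading.Lock()

def predict_safe(model, x):
    # serialize predictions so concurrent threads don't touch the
    # underlying TensorFlow session/graph at the same time
    with lock:
        return model.predict(x)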

To use it, follow these instructions:

1. Unzip files.

2. Open terminal:

    a. execute 'python mnist-tf1.py' to create CNN model1

    b. execute 'python mnist-tf2.py' to create CNN model2

    c. change paths to models generated by steps 2a and 2b in ts.py

    d. execute 'python ts.py'

    e. execute 'python tc.py model1' or 'python tc.py model2' in another terminal.  I like to type 'python tc.py model1 &' and run it several times.

Install Tensorflow 1.3 / Keras on Ubuntu 16.04 with NVIDIA 1080 Ti / Titan X

Note: These directions are subject to change, but worked for me on 8/12/2017 on an Intel 4-core CPU desktop.

1. Remove NVIDIA card from computer and plug Display/HDMI cable to the connector provided by motherboard.

2. Reboot computer and get into BIOS (delete/DEL key while restarting or other key)

3. If your motherboard has Secure Boot, turn it off/disable.  Save BIOS changes and reboot.

4. Install Ubuntu 16.04.x

5. After installation, open terminal.

6. sudo add-apt-repository ppa:graphics-drivers/ppa

7. sudo apt-get update

8. sudo apt-get upgrade

9. sudo apt-get install nvidia-375 (as of 10/26/2017 this may no longer work; you may instead need to use Software & Updates to select the 378.xx driver under Additional Drivers after step 12 and restart, but I haven't confirmed this)

10. Turn off computer

11. Plug in NVIDIA card and switch Display/HDMI to graphics card connector

12. Start up computer

13. Download NVIDIA CUDA Toolkit 8.0 (use the .runfile only) and the NVIDIA cuDNN library from NVIDIA's website.  You have to log in to download the cuDNN libraries.  Use cuDNN 5.1 for TensorFlow versions < 1.3 and cuDNN 6 for version 1.3.

14. Follow the instructions on the NVIDIA website to install the CUDA Toolkit (and the patch, if available), but do not upgrade the NVIDIA driver (you will be asked during .runfile execution) or change the default install directories

15. Follow the instructions on the NVIDIA website to install cuDNN (I put it in my home directory in a folder called cuda)

16. Open terminal.

17. Edit ~/.bashrc and add the following lines at the end of the file:
export LD_LIBRARY_PATH=~/cuda/lib64/:/usr/local/cuda-8.0/lib64/:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-8.0/bin:$PATH

18. Log out of Ubuntu and log back in

19. Open terminal.

20. sudo apt-get install libblas-dev liblapack-dev libjpeg-dev python-dev

21a. For python 2.7:
sudo apt-get install python-pip
sudo apt-get install idle
sudo pip install --upgrade pip
sudo pip install tensorflow
sudo pip install tensorflow-gpu (if you have a GPU)
sudo pip install theano
sudo pip install keras

21b. For python 3.5:
sudo apt-get install python3-pip
sudo apt-get install idle3
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow
sudo pip3 install tensorflow-gpu (if you have a GPU)
sudo pip3 install theano
sudo pip3 install keras

22. Reboot

23. Try importing tensorflow, theano, or keras in python

24. You can change the keras backend by editing the ~/.keras/keras.json file (an example is below, after this list)

25. You can change theano settings by editing the ~/.theanorc file (or programmatically)

26. You can force TensorFlow to use the CPU by executing 'export CUDA_VISIBLE_DEVICES=' in a terminal
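For step 24, a typical ~/.keras/keras.json for a Keras 2.x install looks like this (set "backend" to "theano" to switch backends):

{
    "backend": "tensorflow",
    "floatx": "float32",
    "epsilon": 1e-07,
    "image_data_format": "channels_last"
}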

Cython Example

This is a simple Cython example I made from another website's tutorial.  I remember when I was trying to extend Python functionality with C.  It wasn't too bad, but Cython makes it a lot easier.  Here are the files for the example.

1. Install Cython: “pip install cython” from the Windows command line.

2. Download and install Visual C++ Compiler for Python 2.7 (https://www.microsoft.com/en-us/download/details.aspx?id=44266)

3. Open the “Visual C++ 2008 64-bit Cross Tools Command Prompt”

4. Go to the directory where “fib.pyx” and “setup.py” exist.

5. Type in “fib.bat”

6. Run “testfib.py”
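Since the actual files are behind the link above, here is a rough sketch of what fib.pyx and setup.py look like in this kind of example (illustrative only; the downloaded files may differ):

# fib.pyx
def fib(int n):
    # n-th Fibonacci number computed with C-typed locals
    cdef int i
    cdef double a = 0.0
    cdef double b = 1.0
    for i in range(n):
        a, b = a + b, a
    return a

# setup.py
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize('fib.pyx'))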
