How to install the Librosa library on a Jetson Nano or other aarch64 module

If you are working on an Nvidia Jetson Nano to build deep learning TTS projects, or any project that requires the Librosa library, you have probably hit an issue while pip installing librosa on the Jetson Nano or other aarch64 (ARM64) devices.

I faced the same issue, and here are the steps I followed to install it successfully on the Nvidia Jetson Nano; they should apply to any aarch64 device.

Issue: when you install librosa, you will likely get an error saying that the llvm library cannot be found. If so, just follow the steps below.

/usr/bin$ pip3 install librosa
WARNING: The directory '/home/usr/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
WARNING: The directory '/home/usr/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting librosa
  Downloading https://files.pythonhosted.org/packages/ad/6e/0eb0de1c9c4e02df0b40e56f258eb79bd957be79b918511a184268e01720/librosa-0.7.0.tar.gz (1.6MB)
     |████████████████████████████████| 1.6MB 1.3MB/s 
Collecting audioread>=2.0.0 (from librosa)
  Downloading https://files.pythonhosted.org/packages/2e/0b/940ea7861e0e9049f09dcfd72a90c9ae55f697c17c299a323f0148f913d2/audioread-2.1.8.tar.gz
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from librosa) (1.16.4)
Collecting scipy>=1.0.0 (from librosa)
  Downloading https://files.pythonhosted.org/packages/cb/97/361c8c6ceb3eb765371a702ea873ff2fe112fa40073e7d2b8199db8eb56e/scipy-1.3.0.tar.gz (23.6MB)
     |████████████████████████████████| 23.6MB 1.7MB/s 
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... error
    ERROR: Complete output from command /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpwycpd3kc:
    ERROR: lapack_opt_info:
    lapack_mkl_info:
    customize UnixCCompiler
      libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/aarch64-linux-gnu']
      NOT AVAILABLE
    
    openblas_lapack_info:
    customize UnixCCompiler
    customize UnixCCompiler
      libraries openblas not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/aarch64-linux-gnu']
      NOT AVAILABLE
    
    openblas_clapack_info:
    customize UnixCCompiler
    customize UnixCCompiler
      libraries openblas,lapack not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/aarch64-linux-gnu']
      NOT AVAILABLE

Upgrade setuptools:

sudo pip3 install --upgrade setuptools
sudo pip3 install cython
sudo pip3 install --upgrade pip

Install LLVM & LLVMLITE:

Before we install llvmlite, LLVM 7 or a newer version is required, so install llvm-7 (or llvm-8) as below.

sudo apt-get install llvm-7
sudo pip3 install llvmlite 
sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran

In case this installation throws an "LLVM_CONFIG not found" error, create a symlink for llvm-config-7 as below. Just make sure llvm-config ends up under the /usr/bin path.

/usr/bin$ sudo ln -s llvm-config-7 llvm-config

Great! Now librosa installs without any trouble:

sudo pip3 install librosa

Collecting librosa
Downloading https://files.pythonhosted.org/packages/ad/6e/0eb0de1c9c4e02df0b40e56f258eb79bd957be79b918511a184268e01720/librosa-0.7.0.tar.gz (1.6MB)
|████████████████████████████████| 1.6MB 1.2MB/s 
Collecting audioread>=2.0.0 (from librosa)
Downloading https://files.pythonhosted.org/packages/2e/0b/940ea7861e0e9049f09dcfd72a90c9ae55f697c17c299a323f0148f913d2/audioread-2.1.8.tar.gz
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from librosa) (1.16.4)
Collecting scipy>=1.0.0 (from librosa)
Using cached https://files.pythonhosted.org/packages/cb/97/361c8c6ceb3eb765371a702ea873ff2fe112fa40073e7d2b8199db8eb56e/scipy-1.3.0.tar.gz
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting scikit-learn!=0.19.0,>=0.14.0 (from librosa)
Downloading https://files.pythonhosted.org/packages/57/5c/133b464c8d0be7ac8c9414b6ff2ae848808a35ce03b146fc2c43777e51f9/scikit-learn-0.21.2.tar.gz (12.2MB)
|████████████████████████████████| 12.2MB 1.5MB/s 
Collecting joblib>=0.12 (from librosa)
Downloading https://files.pythonhosted.org/packages/cd/c1/50a758e8247561e58cb87305b1e90b171b8c767b15b12a1734001f41d356/joblib-0.13.2-py2.py3-none-any.whl (278kB)
|████████████████████████████████| 286kB 1.5MB/s 
Requirement already satisfied: decorator>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from librosa) (4.4.0)
Requirement already satisfied: six>=1.3 in /usr/local/lib/python3.6/dist-packages (from librosa) (1.12.0)
Collecting resampy>=0.2.0 (from librosa)
Downloading https://files.pythonhosted.org/packages/14/b6/66a06d85474190b50aee1a6c09cdc95bb405ac47338b27e9b21409da1760/resampy-0.2.1.tar.gz (322kB)
|████████████████████████████████| 327kB 1.6MB/s 
Collecting numba>=0.38.0 (from librosa)
Downloading https://files.pythonhosted.org/packages/7e/89/853a1f03b09f1b13b59c3d785678b47daac6ddd24a285f146d09bb723b85/numba-0.45.0.tar.gz (1.8MB)
|████████████████████████████████| 1.8MB 1.8MB/s 
Collecting soundfile>=0.9.0 (from librosa)
Downloading https://files.pythonhosted.org/packages/68/64/1191352221e2ec90db7492b4bf0c04fd9d2508de67b3f39cbf093cd6bd86/SoundFile-0.10.2-py2.py3-none-any.whl
Requirement already satisfied: llvmlite>=0.29.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.0->librosa) (0.29.0)
Collecting cffi>=1.0 (from soundfile>=0.9.0->librosa)
Downloading https://files.pythonhosted.org/packages/93/1a/ab8c62b5838722f29f3daffcc8d4bd61844aa9b5f437341cc890ceee483b/cffi-1.12.3.tar.gz (456kB)
|████████████████████████████████| 460kB 1.5MB/s 
Collecting pycparser (from cffi>=1.0->soundfile>=0.9.0->librosa)
Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
|████████████████████████████████| 163kB 1.4MB/s 
Building wheels for collected packages: scipy
Building wheel for scipy (PEP 517) ... done
Created wheel for scipy: filename=scipy-1.3.0-cp36-cp36m-linux_aarch64.whl size=39879948 sha256=81dcdf1c6482e268de7e0e745c946c40e7ec271aca06a4ac46030c769300bea6
Stored in directory: /home/danata/.cache/pip/wheels/58/c7/8f/6e0f7d9c19f1b28b317b3c22e2af7131f2834db64f5828b9ee
Successfully built scipy
Building wheels for collected packages: librosa, audioread, scikit-learn, resampy, numba, cffi, pycparser
Building wheel for librosa (setup.py) ... done
Created wheel for librosa: filename=librosa-0.7.0-cp36-none-any.whl size=1599560 sha256=396b828224b4deb2d13a3e74fca92106c9f429d64b99e33551b2af8a95022cbf
Stored in directory: /home/danata/.cache/pip/wheels/49/1d/38/c8ad12fcad67569d8e730c3275be5e581bd589558484a0f881
Building wheel for audioread (setup.py) ... done
Created wheel for audioread: filename=audioread-2.1.8-cp36-none-any.whl size=26091 sha256=d4af58e8126a3b259f340fe43a9cf3701fa1650285020c432a313d2750847da8
Stored in directory: /home/danata/.cache/pip/wheels/b9/64/09/0b6417df9d8ba8bc61a7d2553c5cebd714ec169644c88fc012
Building wheel for scikit-learn (setup.py) ... done
Created wheel for scikit-learn: filename=scikit_learn-0.21.2-cp36-cp36m-linux_aarch64.whl size=16007990 sha256=fce3ea10996c5dc8c66f62cbcc0517c58d3f76a5d502cbbf81d57dd6c4e50441
Stored in directory: /home/danata/.cache/pip/wheels/58/f0/f6/27e0c1fb17f342b9aeef7b2592f1e7ed96fd2536f71a90bb16
Building wheel for resampy (setup.py) ... done
Created wheel for resampy: filename=resampy-0.2.1-cp36-none-any.whl size=321176 sha256=c9d4dd261f54e1915562e327278510f17616bb779f956d897e324613b9645787
Stored in directory: /home/danata/.cache/pip/wheels/ff/4f/ed/2e6c676c23efe5394bb40ade50662e90eb46e29b48324c5f9b
Building wheel for numba (setup.py) ... done
Created wheel for numba: filename=numba-0.45.0-cp36-cp36m-linux_aarch64.whl size=2557794 sha256=6b1ab9e8f5d4d5ff1003c7eb6ae9eb338056bf0f9424b4efc6aadd5b8685bea9
Stored in directory: /home/danata/.cache/pip/wheels/51/5d/c0/420ea2fced22bb1702a294c2cbc0dcaefd6ed61f3d6253fd61
Building wheel for cffi (setup.py) ... done
Created wheel for cffi: filename=cffi-1.12.3-cp36-cp36m-linux_aarch64.whl size=325260 sha256=0c9eba3a99fca1894dbd37a3fb04897edc42826d788cfacd19070fbd8c03743b
Stored in directory: /home/danata/.cache/pip/wheels/94/a7/7a/9782ab473d88ec2d4994a7dd2d006b1352c71da3ad34ebcaeb
Building wheel for pycparser (setup.py) ... done
Created wheel for pycparser: filename=pycparser-2.19-py2.py3-none-any.whl size=112040 sha256=04c0d1d7b118a88615851243c4ce7d026cc4fbcfc358f5daca370a0f34b96ebf
Stored in directory: /home/danata/.cache/pip/wheels/f2/9a/90/de94f8556265ddc9d9c8b271b0f63e57b26fb1d67a45564511
Successfully built librosa audioread scikit-learn resampy numba cffi pycparser
Installing collected packages: audioread, scipy, joblib, scikit-learn, numba, resampy, pycparser, cffi, soundfile, librosa
Successfully installed audioread-2.1.8 cffi-1.12.3 joblib-0.13.2 librosa-0.7.0 numba-0.45.0 pycparser-2.19 resampy-0.2.1 scikit-learn-0.21.2 scipy-1.3.0 soundfile-0.10.2
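
To confirm that the build really works, here is a quick sanity check of my own (not part of the original steps): it just loads librosa and computes one feature on a dummy signal.

import librosa
import numpy as np

y = np.zeros(22050, dtype=np.float32)        # one second of silence at 22.05 kHz
mfcc = librosa.feature.mfcc(y=y, sr=22050)   # should run without raising if the build succeeded
print(librosa.__version__, mfcc.shape)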

Hope this helps. Now go ahead and build all your wonderful sound-related deep learning projects. All the best.


Most liked Python sources

Python Programming:

Python is becoming a very interesting and important programming language that plays a part in almost every IT field. It can be used for web development (Django, Flask), data analysis (Pandas, NumPy), data science (SciPy), voice assistants (Flask-Ask, Flask-Assistant), and more, so which module you use depends entirely on your requirements. On Quora there are many questions raised by Python beginners; in fact, some people who are not even in the IT field want to learn Python simply because it is a hot topic in data science. With so many people pursuing Python for different requirements, it is tempting to download code straight from the internet just to get the job done, but at the same time it is essential to make sure proper code ethics are followed.

PIP Install:

Though there are many modules and code samples available on GitHub and elsewhere on the internet, we need to be very sure about which module we choose and whether it comes from a trusted source. To help control that, the Python community long ago introduced the Python Package Index (PyPI), which defines a certain level of code conduct and packaging format that has to be followed before a package can be published there. So it is always best practice, and more secure, to install any Python module with pip install, which downloads the most recent stable version directly from the PyPI repository.

Python code on GitHub:

If no pre-built module on PyPI meets your requirements and the special code you need lives in an open-source GitHub repository, you can go ahead and clone it for your use. However, before integrating that module or code into your own project, it is good to validate it and review the code to ensure there is no leakage, such as unnecessary HTTPS routing or unexpected network connections. Also, prefer code that has a proper README with a detailed walkthrough, check how many issues are being discussed, review the branch layout, the number of contributors, and how many forks it has. You can also look up the publisher on Twitter and LinkedIn.

Follow Python:

Some developers write Python code all day, while others only touch Python when required. It does not matter whether you are a full-time Python programmer or write code only intermittently; if you love Python and want to write it in a Pythonic way, there are plenty of sources through which you can keep feeding yourself information, news, tricks, and techniques about Python.

I am not coding much any more, but I follow certain podcasts, people, and sites to keep myself up to date with Python. Here are some of the sources I follow personally; they have really benefitted me when I do some simple coding now and then.

https://talkpython.fm/: This website is specifically a Python podcast, and it covers genuinely technical and useful modules that you can use in your own Python code. The podcast is available in the iPhone podcast store, Google Play store, and SoundCloud. It is weekly and runs for almost an hour, but it is very interesting when you follow the discussions regularly. I encourage you to follow it to discover modules that are really useful. The owner of the site also delivers well-designed paid Python courses from beginner to advanced level.

https://pythonbytes.fm/: This is another podcast that delivers more developer-oriented news for following Python.

Python 100 days challenge: If you are between projects or enjoying a vacation but still want to write some Python, you can participate and start writing code to challenge yourself.

Python community: You can be active in the Python communities to get your questions clarified and to raise support requests.


Real Python: On this site you can register to get Python tricks on a daily basis. You can also find free courses and video tutorials delivered by the site's owner. The well-known book Python Tricks is a complete guide to tricks you can write and use in your own code.

PyData: If you write Python for data analysis, you can follow this community and learn more about pandas and the other modules used specifically for data analysis.

Full Stack Python: This site delivers complete, useful, free Python learning content covering web development, data analysis, and more.

You can also join your local regional meetup group and participate in discussions about Python and the projects being delivered in it.

I hope these materials are useful to help you grow in Python and achieve your goals in your respective streams.


Alternative for CX_Oracle with Subprocess in Python

Operations and production support teams often kick-start automation of their day-to-day tasks themselves, since nobody is happy doing monkey work. To automate this stuff, many support engineers choose a dynamic interpreted language like Bash, Python, or Perl, which can handle repeated tasks without manual intervention; by doing so, the productivity of the team also increases drastically.


There are some bottlenecks when we do automation, though: most Unix servers come pre-built with dynamic scripting languages such as Perl and Python with only a minimal set of standard libraries installed, and Windows servers with Batch and PowerShell. When we turn our requirements into code, external libraries may be needed; some are really easy to install from a central repository (e.g. Perl from CPAN, Python from PyPI), while others require additional setup tools.

Connect to DB:

Most of our applications in the banking industry use Oracle as the backend OLTP database. As a support guy, I have to run some SQL queries through the SQLPLUS tool to prepare the daily transaction stats reports. Doing this manually takes at least an hour and is error-prone, so I spent an extra hour automating it.

Understanding the requirements:

Since Python is our chosen language for automating these operational pieces, I first tried to get the cx_Oracle library (Python's Oracle database driver) installed on our production server, but that was not as simple as I had hoped. The installation needs admin privileges along with some additional setup tools. As per our internal process I raised a ticket to the Unix admin with the necessary approvals, but the admin could not install it either, since it required further libraries; moreover, installing libraries directly on production could cause an impact.


It was now my call whether to involve the development team, who are not Pythonists, or to leave the task manual forever. Then I went back to my original requirement, which was only to execute SELECT statements and did not involve any DML or DDL.

That can be achieved with a plain SQLPLUS command, the way it is generally done in a Bash script. But my preference was pure Python rather than yet another module inside a Bash script, at least to stay platform independent across Unix and Windows. To achieve this, I wrote a simple SQL_CONNECT Python class that drives the SQLPLUS command through the subprocess library, and I will walk through its code style here.

Security concern:

Since we are going to use SQLPLUS, the user and password are obviously visible on the command line. So, with the DBA's assistance, a DB role can be created that grants the respective user SELECT-only privileges on specific tables and columns.

Libraries to import:

import subprocess
import types
import re

SQLPLUS is an external command that can be invoked and communicated with through subprocess; the types module checks whether the input parameters passed are of the expected types, and the re module filters out the ORA- errors.

Wrapping the code in a class:

class sql_connect():
        def __init__(self,ora_userid,ora_passwd,ora_sid,ora_proj,ora_role=None):
                self.ora_user=ora_userid
                self.ora_pwd=ora_passwd
                self.ora_sid=ora_sid
                self.ora_proj=ora_proj
                if type(ora_role) == types.TupleType:
                        self.role_name,self.role_pwd=ora_role
                else:
                        self.role_name = None

It is always good practice to use OOP (object-oriented programming), which improves both our level of coding and code reusability. Python is a fully object-oriented language, so classes in Python come with all the standard mechanisms such as inheritance, derived classes, overriding, and so on.

The __init__ method sets up our instance variables, and self keeps those variables bound to the object so they are not exposed outside unless accessed through an object reference.

In the variable declarations above, ora_userid, ora_passwd, and ora_sid are the parameters needed for the SQLPLUS command, ora_proj is passed when the DB is shared, and ora_role takes its input in tuple format. ora_role is validated with an if condition and may also be left as None if no role is set on your DB side.

Connection method using subprocess:

def conn(self,ora_query):
                connt_sid = " %s/%s@%s "% (self.ora_user,self.ora_pwd,self.ora_sid)
                error = re.compile(r'(ORA-)\d+')
                sql_conn = subprocess.Popen(['sqlplus','-S',connt_sid], stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
                sql_conn.stdin.write(self.ora_proj)
                if self.role_name:
                        sql_conn.stdin.write('\n set role '+self.role_name+' identified by '+self.role_pwd+';')

                sql_conn.stdin.write('\n whenever sqlerror exit 2;')
                sql_conn.stdin.write('\n set feedback off;')
                sql_conn.stdin.write('\n set head off;')
                sql_conn.stdin.write('\n set pages 0;')
                sql_conn.stdin.write("\n set null '0';")
                sql_conn.stdin.write("\n set colsep '|';")
                sql_conn.stdin.write('\n set lines 1000;')
                sql_conn.stdin.write("\n select 'PYTHONSTRINGSEPSTARTSHERE' from dual;")
                sql_conn.stdin.write("\n "+ora_query+";")
                out,err=sql_conn.communicate('\n exit;')
                if sql_conn.returncode == 0 and not error.search(out):
                        return out.split("PYTHONSTRINGSEPSTARTSHERE ")[-1]    # split on the marker row to return only the query output
                else:
                        return None


The connection to the DB is made through SQLPLUS via the subprocess library, and a series of primary SET operations first cleans up the SQLPLUS command-line output before the query passed in through the ora_query variable is executed.

The regular expression (re) module pre-compiles r'(ORA-)\d+' to check for ORA- followed by the expected digits, Oracle's standard error format.
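
As a quick illustration (the sample message below is just a typical sqlplus error line, not output from this script), the compiled pattern flags ORA- errors like this:

import re

error = re.compile(r'(ORA-)\d+')
sample = "ERROR at line 1:\nORA-00942: table or view does not exist"
print(bool(error.search(sample)))   # True, so conn() would treat this output as a failure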

Here the subprocess has to be opened with STDIN, STDOUT, and STDERR piped, to allow two-way communication with the SQLPLUS command. The sql_conn variable carries the connection session, so we write the query to its stdin and use the communicate method to retrieve the output or error.

subprocess also exposes a useful returncode attribute, which confirms that the called command produced its output with a proper exit status and gives me confidence in the result.

In the class above I have completely suppressed the errors and return only the query output, but you can adapt the script and give your errors more meaning.
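
A minimal usage sketch of the class follows; the SID, schema statement, credentials, table, and column names are placeholders I made up for illustration, not values from this article.

db = sql_connect('report_user', 'secret', 'ORCL',
                 'alter session set current_schema=appdata;',
                 ora_role=('readonly_role', 'role_pwd'))     # all placeholder values
output = db.conn("select count(*) from transactions where trunc(txn_date) = trunc(sysdate)")
if output is not None:
    print("Today's transaction count: " + output.strip())
else:
    print("Query failed or returned an ORA- error")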

Hope this article encourages you to work more on your automation and to find the alternatives that keep your coding journey going. This may be a simple class, but it should encourage you either to write code or to write articles.

Regular expression in Python

Regular expressions are for when the format is irregular: the regular expression is one of the most important tools for handling strings with patterns effectively.

Regular Expression:


A regular expression searches for a particular pattern within a large amount of text; put another way, regular expressions let us extract formatted strings from strings.

Initially, regular expressions were handled very well by the Unix command "sed" (stream editor), widely used for operations such as string replacement, file editing, extracting the lines between one point in a file and another, and so on.

vinoth@vinoth:~$ echo "Men is intelligent" | sed s/Men/Women/
Women is intelligent   -- the string 'Men' is substituted with the string 'Women'

Then awk was added to some flavours of Unix as a text-processing command. awk works very well for text processing, and certain applications have been developed entirely in it.

vinoth@vinoth:~$ echo "Awk is very good language, Yes" | awk '$1 ~ /'wk'/ {print $NF}'
Yes  -- if the first word contains "wk", print the last field

Interesting about Regular Expression:

Regular expressions are like candy for the operations or production support engineer who does scripting. I started my career as a production support agent, and my main responsibility was to monitor the transaction logs and raise an alert if any anomalies were identified in them.

Logs carry the running information about an application, server, or service, in either a structured or an unstructured format.

For example, a web server log (catalina.out) stores whatever the developer wrote to SOP (System.out.println or System.err) during Java application development. Once the application is deployed to a production server, the production/application support team identifies the patterns of possible errors in the server/application logs and configures them as monitoring parameters for continuous application monitoring.

Identifying error/exception patterns in logs with a scripting language is a very interesting job. Below is a TeamViewer application log that I wanted to monitor for the pattern "error" followed by a 3-digit number.

2017/11/08 22:38:38.130 32725 4116704064 S CTcpProcessConnector::CloseConnection(): Shutdown socket returned error 107: Transport endpoint is not connected
2017/11/08 22:38:38.601 32725 4125096768 S TVRouterClock Schedule next request in 0 seconds
2017/11/08 22:38:38.640 32725 4125096768 S! KeepAliveSessionOutgoing::ConnectEndedHandler(): KeepAliveConnection with server50705.teamviewer.com ended

As we can see in the first log line above, "error 107" represents a transport-endpoint connection issue, and the word error is commonly followed by three digits representing other errors as well. In this case I wanted to monitor my TeamViewer application log for "error" space "3 digits", so I first tried the grep command with just the pattern error and got output like the one below.

less Teamview.log | grep 'error'

Oops: that output contains both "No error" lines and real error lines, but my requirement is to extract only the lines where the word "error" is followed by a space and a 3-digit number.

less Teamview.log | grep 'error [0-9][0-9][0-9]'  -- force grep to match the word error followed by a 3-digit number, each digit between 0-9

There are still better ways to achieve this kind of requirement, but they are out of scope for this article.

Using the well-known grep command, I got the error pattern, which I can then configure in a monitoring tool such as Nagios or Splunk for further monitoring and alerting.

Regular expressions made Perl:


Perl is a general-purpose programming language developed mainly for text manipulation, driven by an interest in sed and awk. The history of this language, created by Larry Wall in 1987, is what encouraged me to learn scripting languages; many programming languages have been developed, but the story of Perl's creation and the man behind it is particularly interesting. I encourage you to read about it, and about Larry Wall.

Power of Perl regular expression:

I would like to run through one more example that can be solved with a Unix command as well as with Perl, so you can feel the power of Perl along the way.

Requirement: I have a variable holding the sentence "This is regular expression example". In this sentence I want to check whether the word regular is present; if it is, I want to print "Regular expression found".

Unix Style:

To achieve this requirement in the Unix style I used awk, passing the column number of the word in the sentence: column $1 refers to the word "This", $2 to "is", $3 to the expected word "regular", and so on, and the action prints the message when $3 matches. That gets the requirement done, but if the word regular is not in column $3 the command fails. We could work around that with an awk for loop over all the fields, but that gets a bit lengthy.

Perl Style: 

The same requirement can be achieved with a very simple dynamic regular expression in the pure Perl way.

I declared the sentence in the variable $a, passed it directly into an if condition using the binding operator =~ (matches the pattern on the right), and then printed the message.

Regular Expression of Python:

Before Python version 1.5, regex was the module used in Python for working with regular expressions, and in that era many Python developers resorted to integrating Perl with Python for regular-expression scenarios. The re module was then introduced to provide Perl-style regular expression patterns in Python.

Below is what the re module exposes (the output of dir(re)):

['DEBUG', 'DOTALL', 'I', 'IGNORECASE', 'L', 'LOCALE', 'M', 'MULTILINE', 'S', 'Scanner', 'T', 'TEMPLATE', 'U', 'UNICODE', 'VERBOSE', 'X', '_MAXCACHE', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__version__', '_alphanum', '_cache', '_cache_repl', '_compile', '_compile_repl', '_expand', '_locale', '_pattern_type', '_pickle', '_subx', 'compile', 'copy_reg', 'error', 'escape', 'findall', 'finditer', 'match', 'purge', 'search', 'split', 'sre_compile', 'sre_parse', 'sub', 'subn', 'sys', 'template']

The common symbols, characters, and syntax follow the standard format expressed in practically all scripting and programming languages, and they are explained very well in the re module documentation.

Eg:

“.”  –> the dot matches any character except the newline '\n'

>>> sent = "This is an example in learninone.com"  -- variable assignment 
>>> import re  --re modules imported
>>> reg=re.search('exam...',sent)   -- Searching for exam after that three character by representing ...
>>> reg.group()  -- group to get only the word that i wanted to search.
'example'
>>> reg.string  -- Print whole string
'This is an example in learninone.com'

With the same example, if I use 4 dots the regular expression will no longer fetch the word example once a newline character is introduced, because the dot "." cannot match the newline '\n'.

>>> sent = "This is an examplen in learninone.com"   --Introduced n (newline) character
>>> reg=re.search('exam....',sent) -- search for exma.... in sent variable
>>> reg.group()                     -- As you can see the None type returned
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'group'
>>> reg.string
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'string'

The Python re module documentation explains all of this with many more examples, so without further ado let me give a real-world example.

Often used example,

Quite often the admin team has to fetch the list of IPs used across all the files and folders on a server. As you all know, an IP has the form 255.255.255.255. This can be achieved simply with the well-known modules os, socket, and re:

os: to walk through all the files and folders on the server using os.walk().

re: to extract the IPs, as explained below.

socket: to validate that a value fetched by the regular expression really is an IP (a combined sketch follows the list).
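
Before looking at the regular-expression piece on its own, here is a rough sketch of my own showing how the three modules could hang together; the starting directory /var/log and the compact IP pattern are assumptions for the example.

import os
import re
import socket

ip_pattern = re.compile(r'(?:\d{1,3}\.){3}\d{1,3}')   # compact form of the pattern discussed below

def scan_for_ips(top):
    found = []
    for folder, _, files in os.walk(top):                            # os: walk every folder under 'top'
        for name in files:
            path = os.path.join(folder, name)
            try:
                with open(path) as fh:
                    for candidate in ip_pattern.findall(fh.read()):  # re: pull out IP-shaped strings
                        try:
                            socket.inet_aton(candidate)              # socket: reject values like 999.1.2.3
                            found.append((path, candidate))
                        except socket.error:
                            pass
            except (IOError, OSError, UnicodeDecodeError):
                pass                                                 # unreadable file, just skip it
    return found

print(scan_for_ips('/var/log'))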

>>> line = 'This is simple regular expression example to extract the IP 255.255.255.255'
>>> ips = re.findall(r'(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})$', line)
>>> print ips
['255.255.255.255']

In the above example,

(?:[\d]{1,3})\. :- this portion matches one to three digits ([\d]{1,3}) followed by a literal dot, escaped as \. so it is not treated as the any-character wildcard; (?: ... ) makes it a non-capturing group, so findall returns the whole match rather than just the group. The same portion is repeated to pick up the complete IP address (255.255.255.255), and $ anchors the pattern to the end of the line.

The re.findall method scans the string from left to right with this pattern and returns every match.

>>> line = 'This is simple regular expression example to extract the IP 255.255.255.255 and there is another dummy ip as 101.21.122.20 final'
>>> ips = re.findall(r'(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})', line)
>>> print ips
['255.255.255.255', '101.21.122.20']

The same example extended to include two IPs. If you look at the argument to findall, I have removed the "$" end-of-line anchor, because the sentence above no longer ends with an IP. And since there are two IPs in the sentence, findall produces its output as a list.

Beyond findall there are many more methods on the re module and its pattern and match objects, such as search, match, finditer, split, sub, subn, flags, groups, groupindex, pattern, expand, and so on, which really ease your work when you play with regular expressions.
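
A few of those helpers in action on a throwaway string of my own, reusing the "error NNN" idea from earlier:

import re

text = 'error 107 then error 204 then no error'
print(re.search(r'error \d{3}', text).group())     # 'error 107' (first match only)
print(re.findall(r'error \d{3}', text))            # ['error 107', 'error 204']
print(re.sub(r'error \d{3}', 'error NNN', text))   # replaces every numeric error code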

I would encourage you to go through the links captured in this article, as they will help you practise regular expressions in Python and even in other languages.

Payment Card check digit using Python

We have already learned to code a simple crontab in Python, and I believe that should have encouraged you to try your own ideas as Python code. In this article I have written a small Python function to perform payment-card check-digit verification.

Before we start discussing the Python function, let me give you some high-level information about the card check-digit project.

PCI Overview:


In the payment card industry (PCI), the card number is important, secured information to store or handle in an application. As you all know, we should not disclose our credit/debit/prepaid card information to any third party for any reason unless it is solely for making a payment. The same importance and governance is required of organisations that support or provide banking operations on behalf of banks, financial institutions, and global payment and technology providers.

In simple terms, this all comes under PCI compliance certification for merchants of all sizes, financial institutions, point-of-sale vendors, and the hardware and software developers who create and operate the global infrastructure for processing payments. Every bank or FI can then give its projects to vendors who are PCI certified, trusting that the cardholder's personal account information (PAI) will be handled in a secure manner.

Further, to obtain this PCI certificate, organisations engage their internal information-security team to review the systems, logs, applications, and microservices to ensure that no PAI is leaked. This verification happens over a certain period of time or on every software release.

Now, to verify whether clear card numbers are stored in some unsafe location, in logs, or even in emails, an automatic scan should be scheduled on the servers to detect card numbers and throw a notification if anything is found.

Project discussion:


The 16-digit payment card numbers we see on our debit/credit/prepaid cards carry a check digit generated using the Luhn algorithm, also known as the "modulus 10" algorithm. We will discuss the algorithm as we write the code below.

The idea is to have a simple UI that takes the card number as user input and returns whether it is a valid card number or not. I am going to use the Tkinter module and mostly play with the Python list data structure.

Simple Tkinter program:

First, we need to import the module for our project,

from Tkinter import *  # import all of Tkinter's names into our namespace

Next, we need to define our UI Tkinter label

w = Tk()  # create the Tkinter root object
Label(w, text="Enter 16 digit number:").pack()  # define our label and pack it

With the "w" Tkinter object, I created a simple label with some text and wrapped it with the pack() method, which positions the widget geometrically, for example displayed on the top or on the side.

eg:

Label(w, text="Your Expression:").pack() --defualt is top

 

Label(w, text="Your Expression:").pack(side = "left")

After creating the label, we define an Entry() widget for our Tkinter object w and assign it to the entry variable.


The entry widget is then bound to the <Return> key, so pressing Enter calls card_check_digit, which is the core function of this project. The entry variable is also packed with pack(), and the text the user typed is read back inside the bound function and used as its input.

entry = Entry(w)
entry.bind("<Return>", card_check_digit)
entry.pack()

Simple Check digit Function:

We define our own function, card_check_digit, to perform the check-digit verification (Tkinter passes the key event as the argument; the card number itself is read with entry.get()). We declare the odd and even lists, get the card number, and slice each digit of the card number out as an int.

def card_check_digit(card):  # ":" starts our function block; Tkinter passes the <Return> key event in as the argument
 card_even=[]
 card_odd=[]
 card_number = entry.get()
 card_list_var = list(card_number) # convert the string into a list of characters
 card_list = map(lambda x:int(x),card_list_var) # convert each value in the list from string to int

Here I use the mutable list object to play around with the payment card number. The value received from the Tkinter entry widget via entry.get() is converted into a list as shown above, and the list is then passed to Python's map function to convert each value to an int in card_list.

The map() function applies any lambda, user-defined, or built-in function to each value of a sequence type such as a list, tuple, or set.
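
A tiny standalone illustration of map() with a lambda, separate from the card project:

doubled = map(lambda x: x * 2, [1, 2, 3, 4])
print(list(doubled))   # [2, 4, 6, 8]  (wrapped in list() so the same line also works on Python 3)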

To follow the Luhn approach we set aside the last digit of the given card number by slicing only the first 15 digits (positions 0 to 14 of card_list) into the card_to_test list.

card_to_test = card_list[0:15]

Next step is to reverse the list values.

card_to_test.reverse()

Next, I insert a 0 at position 0 of the list in order to separate the even and odd positions, so that the digits at the odd positions can be multiplied by 2, as per step 4 of the Luhn algorithm.

card_to_test.insert(0,0) # insert a dummy 0 at position 0 of the list
map(lambda x,y:card_even.append(int(x)) if y % 2 == 0 else card_odd.append(int(x)),card_to_test, range(len(card_to_test)))
card_odd_2 = map(lambda x: int(x)*2,card_odd)

Again, map() is a wonderful function for shortening your code while achieving more. As you can see above, the card_to_test values are passed in as "x" and the range(len(card_to_test)) values as "y", and the mod 2 test on y determines whether each position is odd or even.

range: range is the Python built-in function which creates a list of integers.

>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> type(range(10))      # it creates a list
<type 'list'>
>>> type(range(10)[1])   # holding integer values
<type 'int'>


lambda: lambda defines your own anonymous function. The lambda concept is used widely across technologies, for example AWS Lambda (the serverless service), but in Python it is typically used together with the functional tools filter(), map(), and reduce(), letting you define an anonymous function and apply it to a set of values. In the code above, the lambda is declared with two variables x and y followed by ":" and the action to perform: append the value to card_even or to card_odd depending on the if condition y % 2 (whether the position in card_to_test is even or odd). This iteration continues as long as map() receives values from both card_to_test and range(len(card_to_test)).

Once I have the even and odd digits in the card_even and card_odd lists, one more map iteration over card_odd multiplies those values by 2, as per the Luhn algorithm. The next step is to subtract 9 from any result greater than 9, then sum all the numbers to arrive at a final value.

card_over_nine = map(lambda x: x if x < 9 else (x - 9), card_odd_2)  # same mapping approach as described for the previous line of code
tot_odd = sum(card_over_nine)
tot_even = sum(map(lambda x: int(x),card_even))
final = tot_odd+tot_even

As you can see above, the card_over_nine list is built by mapping over card_odd_2 with simple logic: if a value in card_odd_2 is greater than 9, subtract 9 from it. With the final value we then verify whether the last digit matches the formula 10 - final % 10.

last_val = 10 - final % 10  # note: this gives 10 when final is a multiple of 10; (10 - final % 10) % 10 avoids that edge case

We now have the value to match against the last digit of the card number, the one we set aside in the first step (card_last below). The if logic does the comparison, and the response is passed to a Tkinter label, which is then packed with pack() to show on the small Tkinter UI:

card_last = card_list[15]  # the check digit we set aside at the start
if last_val==card_last:
  res.configure(text = "Your card is valid for payment")
else:
  res.configure(text = "This is not a valid payment card number")

Finally, control is handed over to the Tkinter main loop, which keeps monitoring for user input.

res = Label(w)
res.pack()
w.mainloop()
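
For reference, here is a compact, Tkinter-free sketch of the same Luhn ("modulus 10") check; the sample number at the end is a commonly used Luhn-valid test value, not a real card.

def luhn_valid(card_number):
    digits = [int(c) for c in str(card_number)]
    check_digit = digits[-1]
    payload = digits[:-1][::-1]                 # set the check digit aside and reverse, as above
    total = 0
    for position, digit in enumerate(payload):
        if position % 2 == 0:                   # every second digit, counting from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return (10 - total % 10) % 10 == check_digit

print(luhn_valid('4532015112830366'))           # True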

Output Session:

Let's test the code first with a random 16-digit number and then with an actual payment card (masked, since I cannot publish the full digits).

[Screenshots: an error is thrown for the wrong card number; a valid card number is validated.]

Hope this walkthrough of card_check_digit gives you a high-level understanding of how online payment systems perform basic validation of a card number, along with the technical details of Python lists, map(), lambda, range, and a little bit of Tkinter. If you need more explanation of the code discussed here, leave a comment and we will chat.

Simple Cron-tab using python

In the Unix/Linux world, crontab is a very useful tool for scheduling any job or script to trigger at a particular time. There are many scheduling tools on the market that follow the concepts of crontab.

So I will walk you through writing a simple crontab-like application in Python, which can encourage you to develop your own small application as a start in the language.

Before we start writing the crontab application in Python, let me recap the requirements that crontab itself implements.

Cron is a system daemon used to execute desired tasks (in the background) at designated times. 

A crontab file is a simple text file containing a list of commands meant to be run at specified times. It is edited using the crontab command. The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background.

Each user (including root) has a crontab file. The cron daemon checks a user's crontab file regardless of whether the user is actually logged into the system, and triggers the respective scripts/jobs at their scheduled times.

How do you schedule a job using crontab? First, we need to edit the crontab table file by executing the command below:

vinoth@LAPTOP-U4G2071G:~$ crontab -e

This opens the table file, where each entry specifies the time in the format minute (0-59), hour (0-23, 0 = midnight), day (1-31), month (1-12), weekday (0-6, 0 = Sunday), followed by the full path of the script/job you want to call.

With this in mind, I wrote simple Python code as one of my first learning steps in the language.

The crontab is available under https://github.com/Vino-git/Learninone/blob/master/Cron-Job.py

Now let's walk through the code.

import time

The core requirement for our program is a scheduler, so the time module is obviously important; hence it is imported first.

def cron_time(minu, hour, day_of_month, month_of_year, day_of_week, year):

In any language, writing your piece of code as a function is always meaningful. Refer to https://docs.python.org/2/tutorial/controlflow.html#defining-functions

If you noticed, the ":" syntax marks the start of a new block. As per the requirements, I pass the same 6 parameters we would give to a crontab entry, namely minute, hour, day_of_month, month_of_year, day_of_week, and year, and the function returns a boolean value.

try:
-------------- set of codes--------
---------------
except:

While my code runs there is a chance it aborts because of some kind of runtime error.

Note: when we execute Python code, the interpreter first translates the source code into byte code, performing syntax analysis, lexical analysis, and so on during that translation.

To catch such runtime errors and let the code finish smoothly, I use the try and except keywords. When the Python parser reads the keyword "try:" it checks that a matching "except:" block is present in the code. If any error is raised inside the try block, execution jumps straight to the except block and does whatever we ask there; for example, I can continue or exit depending on the severity of the error raised in the try block.

As the next step, I read the present system time as below.

>>> localtime = time.localtime()
>>> print localtime
time.struct_time(tm_year=2017, tm_mon=10, tm_mday=16, tm_hour=22, tm_min=23, tm_sec=33, tm_wday=0, tm_yday=289, tm_isdst=0)

Here the time module returns the current time details, from seconds up to the year, along with a daylight-saving indicator; tm_isdst is non-zero if the system is set up with DST.

Next, I simply use nested if conditions, checking whether localtime.tm_min is greater than or equal to the minute given in the parameters, and follow the same idea up to the year. If the given parameters have been reached by the current system minute, hour, day, month, weekday, and year, the function returns the boolean value True, otherwise False.
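
The published function is in the GitHub link above; a simplified sketch of the idea just described (not the exact published code) could look like this, with '*' standing for "match anything", as in a crontab entry.

import time

def cron_time(minu, hour, day_of_month, month_of_year, day_of_week, year):
    try:
        now = time.localtime()
        checks = [
            (minu,          now.tm_min),
            (hour,          now.tm_hour),
            (day_of_month,  now.tm_mday),
            (month_of_year, now.tm_mon),
            (day_of_week,   now.tm_wday),
            (year,          now.tm_year),
        ]
        for wanted, current in checks:
            if wanted != '*' and current < int(wanted):
                return False        # this part of the schedule has not been reached yet
        return True
    except (TypeError, ValueError):
        return False                # malformed parameter: fail quietly, like the except block above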

The above handles the schedule verification, but we also need a daemon that keeps running cron_time. To create that daemon-like process in Python I used the sched, threading, and time modules.

As usual, to write the daemon program we need to import the modules mentioned above:

import sched   # sched provides the scheduler used to schedule the job/script
import time    # clock and sleep functions for the scheduler
import os
from threading import Timer

s=sched.scheduler(time.time,time.sleep)

scheduler is a class in the sched module; we pass it time.time (to read the current time in seconds) and time.sleep (to wait between scheduled events). Please go through this link: https://docs.python.org/2/library/sched.html?highlight=sched

while True:
    time_next = 10
    if cron_time(30,'*','*','*','*','*'):
        s.enter(time_next, 1, run_job, ())   # run_job is a placeholder for your own job/script function
        s.run()                              # actually execute the scheduled event
    time.sleep(5)

I have created an infinite loop which validates the cron_time trigger and sleeps for 5 seconds.

Now let's see how cron_time works on my Windows computer.

>>> while True: 
      time_next=1 
      if cron_time('*','*','*','*','*','*'): 
        s.enter(time_next,1,simpl_print(),()) 
      time.sleep(59)
Wow, I have got cron like scheduler on my windows
Wow, I have got cron like scheduler on my windows
Wow, I have got cron like scheduler on my windows
  1. I have executed the cron_time function in my python IDLE console.
  2. Created simple function called simpl_print()
  3. imported necessary modules for scheduling my task and called my simpl_print() function using the scheduler to print every minute.

I hope you now have some understanding of, and confidence in, how Python code can be written. I encourage you to practise with your own small requirements as a project.

Directory operations using Python

In past articles we have seen how extensively Python is used on files, parsing them with a variety of objects and modules to perform actions. Python also has modules for performing operations on directories.

Though each operating system has its own directory structure, Python has a special module called os.path, which copes with the path-naming conventions of practically every operating system.

>>> from os import path
>>>
>>> dir(path)
['__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_joinrealpath', '_unicode', '_uvarprog', '_varprog', 'abspath', 'altsep', 'basename', 'commonprefix', 'curdir', 'defpath', 'devnull', 'dirname', 'exists', 'expanduser', 'expandvars', 'extsep', 'genericpath', 'getatime', 'getctime', 'getmtime', 'getsize', 'isabs', 'isdir', 'isfile', 'islink', 'ismount', 'join', 'lexists', 'normcase', 'normpath', 'os', 'pardir', 'pathsep', 'realpath', 'relpath', 'samefile', 'sameopenfile', 'samestat', 'sep', 'split', 'splitdrive', 'splitext', 'stat', 'supports_unicode_filenames', 'sys', 'walk', 'warnings']

To perform directory operations we first import the os module, which contains the functions needed to work with directories.

While going through the directory-operation examples, I will try to show the difference between the Linux and Windows environments.

The first thing we do on any operating system is find out the present working directory.

os.getcwd()  — returns the current working directory.

Unix:
>>> import os
>>> os.getcwd()
'/home/vinoth'

Windows:
>>> import os
>>> os.getcwd()
'C:\Python27amd64'

Next, we would like to know the files and folders present in the current working directory.

os.listdir(path) — To list the files/directories under my path.

Unix:
>>> os.listdir('/home/vinoth')
['.bash_history', '.bash_logout', '.bashrc', '.cache', '.git', '.gitconfig', '.ipynb_checkpoints', '.ipython', '.jupyter', '.lesshst', '.local', '.pip', '.profile', '.ssh', '.viminfo', '.w3m', 'Envs', 'README.md', 'Untitled.ipynb', 'Untitled1.ipynb', 'sample.py', 'sample.pyc', 'venv']

Windows:
>>> os.listdir('C:\Python27amd64')
['design', 'design.egg-info', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'NEWS.txt', 'PKG-INFO', 'python.exe', 'pythoncom27.dll', 'pythoncomloader27.dll', 'pythonw.exe', 'pywintypes27.dll', 'qt.conf', 'README.rst', 'README.txt', 'Scripts', 'setup.cfg', 'setup.py', 'tcl', 'Tools']

Next, we check whether we have access to any of the files/folders inside the directory.

os.access(path, mode) — checks access on the particular path; the mode says which operation we want to test.

Unix:
>>> os.access('sample.py',os.R_OK)
True

Windows:
>>> os.access('README.rst',os.R_OK)
True

R_OK is one of the mode constants exposed by the os module. Below is the list of modes we can use with os.access (a small example follows the list).

  • os.F_OK – value to pass as the mode parameter of access() to test the existence of a path.
  • os.R_OK – value to include in the mode parameter of access() to test the readability of a path.
  • os.W_OK – value to include in the mode parameter of access() to test the writability of a path.
  • os.X_OK – value to include in the mode parameter of access() to determine whether a path can be executed.
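
A small illustration of the modes inside an if condition (the file name is just one taken from the listing above):

import os

target = 'sample.py'
if os.access(target, os.R_OK | os.W_OK):   # modes can be combined with a bitwise OR
    print(target + ' is readable and writable')
else:
    print(target + ' is missing one of those permissions')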

Now let's move into one of the directories inside the current working directory.

Unix:
>>> os.chdir('Envs')
>>> os.getcwd()
'/home/vinoth/Envs'

Windows:
>>> os.chdir('Doc')
>>> os.getcwd()
'C:\Python27amd64\Doc'

We can also perform other operations on directories, such as the following (a short illustration appears after the list):

  • os.chroot – Change the root directory of the current process path.
  • os.chmod(path, mode) – From your python application, you can change the mode of the directory.
  • os.chown(path, uid, gid) – To change the owner of the path.
  • os.link(source, link_name) – To create a link on your directory.
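
Here is the short illustration promised above; the /tmp paths are made up for the example, and chown is left commented out because it normally needs root.

import os

os.mkdir('/tmp/demo_dir')                        # scratch directory for the demo
os.chmod('/tmp/demo_dir', 0o750)                 # owner: rwx, group: r-x, others: none
# os.chown('/tmp/demo_dir', 1000, 1000)          # uid/gid change, usually needs root
os.symlink('/tmp/demo_dir', '/tmp/demo_link')    # symlink variant of os.link, which is for files
print(os.listdir('/tmp'))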

os.walk(directory, followlinks=False):

Directory operations in Python are not limited to the above; we can also traverse any directory tree, from the root right down to the leaves.

>>> import os
>>> os.listdir('.')
['.bash_history', '.bash_logout', '.bashrc', '.cache', '.git', '.gitconfig', '.ipynb_checkpoints', '.ipython', '.jupyter', '.lesshst', '.local', '.pip', '.profile', '.ssh', '.viminfo', '.w3m', 'Envs', 'README.md', 'Untitled.ipynb', 'Untitled1.ipynb', 'sample.py', 'sample.pyc', 'venv']
>>>
>>> os.walk('.')
<generator object walk at 0x7f43309e1a50>  # os.walk produces a generator object, which is iterable
>>> os.walk('.').next()  # each iteration yields a 3-tuple
('.', ['.cache', '.git', '.ipynb_checkpoints', '.ipython', '.jupyter', '.local', '.pip', '.ssh', '.w3m', 'Envs', 'venv'], ['.bash_history', '.bash_logout', '.bashrc', '.gitconfig', '.lesshst', '.profile', '.viminfo', 'README.md', 'Untitled.ipynb', 'Untitled1.ipynb', 'sample.py', 'sample.pyc'])

Here is a more effective way to use os.walk in our code to work through directories and files.

>>> for directory,path,name in os.walk('.'):
...     print directory,path,name,'\n'
...
...
. ['.cache', '.git', '.ipynb_checkpoints', '.ipython', '.jupyter', '.local', '.pip', '.ssh', '.w3m', 'Envs', 'venv'] ['.bash_history', '.bash_logout', '.bashrc', '.gitconfig', '.lesshst', '.profile', '.python_history', '.viminfo', 'README.md', 'Untitled.ipynb', 'Untitled1.ipynb', 'sample.py', 'sample.pyc'] 
./.cache ['pip'] [] 
./.cache/pip ['http', 'wheels'] [] 
./.cache/pip/http ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'] [] 

As we can see in the example above, os.walk travels recursively across the directories and sub-directories. For each directory it produces a 3-tuple: the directory path, its sub-directories, and its file names. By looping over the os.walk object with a for loop we can iterate over every folder and file to do any kind of directory or file parsing.

Note: with the option followlinks=True, os.walk() will also traverse linked directories.

There is also os.scandir('.'), which performs high-performance directory listing and yields directory-entry objects for further action, but it is only available from Python 3.5 onwards.
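
A quick os.scandir illustration (again, Python 3.5+ only, so it will not run on the Python 2 interpreter used elsewhere in this article):

import os

for entry in os.scandir('.'):        # each entry carries its name plus cached stat-like information
    print(entry.name, entry.is_dir())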

os.path:

Another extensive module, which helps when working with path names.

vinoth@LAPTOP-U4G2071G:~$ pwd
/home/vinoth
vinoth@LAPTOP-U4G2071G:~$
vinoth@LAPTOP-U4G2071G:~$
vinoth@LAPTOP-U4G2071G:~$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from os import path
>>> path.abspath('/home/vinoth')
'/home/vinoth'
>>> path.abspath('/home/vinoth/sample.py') # returns the absolute path of the given path
'/home/vinoth/sample.py'
>>> path.basename('/home/vinoth/sample.py') # the last component of the path
'sample.py'
>>> path.commonprefix('/home/vinoth/sample.py') # commonprefix expects a list of paths; a bare string gives ''
''
>>> path.commonprefix(['/home/vinoth/sample.py','/home/vinoth/sample.pyc','/home/vinoth/Untitled.ipynb'])
'/home/vinoth/'
>>> path.dirname('/home/vinoth/sample.py') # only the directory part
'/home/vinoth'

From the example above you might be confused by path.abspath, which simply produces an absolute version of the given path; its purpose becomes clear when you start coding directory operations against path names.

With the os.path module we can perform more path-related checks, and most of its functions return boolean output:

>>> os.path.exists('/home/vinoth') # check whether the given folder is present at this path
True
>>>
>>> os.path.exists('/home/vinothd')
False

On Unix we use the find command to get the recent change, modification, and access times; in Python this is just as easy, so you have a powerful module when scripting any kind of automation for repetitive tasks.

>>> os.path.getatime('/home/vinoth')
1506192316.3023088
>>> os.path.getmtime('/home/vinoth')
1507611152.3911762
>>> os.path.getctime('/home/vinoth')
1507611152.3911762

The functions above return times in seconds since the epoch, but not to worry: the time module can convert them to a human-readable form, as below.

>>> import time # import the time module to access its functions
>>> time.ctime(os.path.getctime('/home/vinoth'))
'Tue Oct 10 08:52:32 2017'
>>> time.ctime(os.path.getmtime('/home/vinoth'))
'Tue Oct 10 08:52:32 2017'
>>> time.ctime(os.path.getatime('/home/vinoth'))
'Sat Sep 23 22:45:16 2017'

The time module is not limited to ctime (convert time); there are more options to explore, but they are out of scope for this article and we can discuss them in upcoming ones.

There are some more functions that work nicely inside if conditions. For example, suppose my program should only access paths that start with '/' on Unix (or with a drive letter on Windows).

>>> os.path.isabs('/home/vinoth') # returns True since the path starts with "/"
True
>>> os.path.isabs('home/vinoth') # returns False since the path does not start with "/"
False
>>> if os.path.isabs('/home/vinoth'):
...     print 'I have full path to do list directory'
...     listdir('home/vinoth')
... else:
...     print 'Its not absolute path, so do join path'
...     os.path.join('.','home/vinoth')
...
I have full path to do list directory
Traceback (most recent call last):
 File "<stdin>", line 3, in <module>
NameError: name 'listdir' is not defined
>>> if os.path.isabs('/home/vinoth'):
...     print 'I have full path to do list directory'
...     os.listdir('/home/vinoth')
... else:
...     print 'Its not absolute path, so do join path'
...     os.path.join('.','home/vinoth')
...
I have full path to do list directory
['.bash_history', '.bash_logout', '.bashrc', '.cache', '.git', '.gitconfig', '.ipynb_checkpoints', '.ipython', '.jupyter', '.lesshst', '.local', '.pip', '.profile', '.python_history', '.ssh', '.viminfo', '.w3m', 'Envs', 'README.md', 'Untitled.ipynb', 'Untitled1.ipynb', 'sample.py', 'sample.pyc', 'venv']
>>> if os.path.isabs('home/vinoth'):
...     print 'I have full path to do list directory'
...     os.listdir('/home/vinoth')
... else:
...     print 'Its not absolute path, so do join path'
...     os.path.join('.','home/vinoth')  # os.path.join glues the path components together
...
Its not absolute path, so do join path
'./home/vinoth'

There are still more methods for boolean-style checks, such as:

>>> os.path.isfile('README.md') # returns True if the given parameter is a file
True
>>> os.path.isfile('.ipython')
False
>>> os.path.isdir('.ipython')  # returns True if the given parameter is a directory
True
>>> os.path.isdir('README.md')
False
>>> os.path.ismount('.')   # returns True if the given parameter is a mount point
False
>>> os.path.ismount('/home')
True
>>> os.path.islink('/home/vinoth/Envs/')  # returns True if the given parameter is a symbolic link
False
>>> os.path.samefile('/home/vinoth/sample.py','/home/vinoth/Test/sample.py') # although the file name is the same, the paths differ, so this returns False
False
>>> os.path.samefile('/home/vinoth/sample.py','/home/vinoth/sample.py') # returns True when both parameters refer to the same path
True

There is still more to say about Python operations on files and directories. These modules are really useful for admins who automate their day-to-day work: automated deployments, OS patch upgrades, log backups, provisioning folder structures, and so on.