
Python Script to Copy Only Changed Files From One Directory to Another

Here is a Python script that copies only changed files from one directory to another:

import os
import shutil

# source directory
src = r"C:\Users\Administrator\Downloads\fold"

# destination directory
dst = r"C:\Users\Administrator\Downloads\foldc"

# copy changed files
for item in os.listdir(src):
    s = os.path.join(src, item)
    d = os.path.join(dst, item)
    if os.path.isdir(s):
        continue
    # copy if missing in dst, or if src is more than 1 second newer
    if not os.path.exists(d) or os.stat(s).st_mtime - os.stat(d).st_mtime > 1:
        shutil.copy2(s, d)

This script copies a file from the source directory to the destination directory if it does not already exist there, or if its modification time is more than one second newer than the destination copy (the one-second tolerance allows for filesystems that store timestamps at coarse resolution). It uses the shutil module’s copy2() function, which also preserves file metadata such as modification times.

Note that this script will not copy directories, and it will not handle errors, such as if the destination directory does not exist or if the user does not have permission to write to the destination directory. You may want to add error handling and additional functionality to the script to handle these cases.
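One way to add that handling is to wrap the loop in a function that creates the destination if needed and skips files it cannot copy. A minimal sketch under those assumptions (the function name sync_changed_files is my own):

```python
import os
import shutil

def sync_changed_files(src, dst):
    """Copy top-level files from src to dst if missing or newer (1 s tolerance)."""
    os.makedirs(dst, exist_ok=True)  # create the destination if it does not exist
    copied = []
    for item in os.listdir(src):
        s = os.path.join(src, item)
        d = os.path.join(dst, item)
        if os.path.isdir(s):
            continue  # skip directories, as in the original script
        try:
            if not os.path.exists(d) or os.stat(s).st_mtime - os.stat(d).st_mtime > 1:
                shutil.copy2(s, d)
                copied.append(item)
        except OSError as exc:  # e.g. permission denied on a single file
            print(f"Skipping {item}: {exc}")
    return copied
```

Returning the list of copied names makes the function easy to log or test; a second run right after the first copies nothing, because copy2() preserves modification times.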

Script to Copy Both Files and Folders

import os
import shutil

# source directory
src = r'C:\Users\Administrator\Downloads\fold'

# destination directory
dst = r'C:\Users\Administrator\Downloads\foldc'

# copy changed files and directories
for item in os.listdir(src):
    s = os.path.join(src, item)
    d = os.path.join(dst, item)
    if os.path.isdir(s):
        if not os.path.exists(d):
            shutil.copytree(s, d)
        else:
            for f in os.listdir(s):
                sf = os.path.join(s, f)
                df = os.path.join(d, f)
                if os.path.isdir(sf):
                    continue
                if not os.path.exists(df) or os.stat(sf).st_mtime - os.stat(df).st_mtime > 1:
                    shutil.copy2(sf, df)
    else:
        if not os.path.exists(d) or os.stat(s).st_mtime - os.stat(d).st_mtime > 1:
            shutil.copy2(s, d)

This version copies an entire directory with copytree() when it does not yet exist in the destination; when it does exist, it only refreshes the files one level deep, so changes in deeper subdirectories are not picked up. A fully recursive sync would need os.walk() or a recursive function.

Python Script to Copy Files and Directories From One Folder to Another

Here is a Python script that copies files and directories from one folder to another:

import os
import shutil

# source directory
src = '/path/to/source/folder'

# destination directory
dst = '/path/to/destination/folder'

# copy files and directories
for item in os.listdir(src):
    s = os.path.join(src, item)
    d = os.path.join(dst, item)
    if os.path.isdir(s):
        shutil.copytree(s, d)
    else:
        shutil.copy2(s, d)


This script copies all files and directories from the source directory to the destination directory, using the shutil module’s copytree() function for directories (which copies them recursively) and copy2() for files.

Note that this script does not handle errors: copytree() raises FileExistsError if a destination subdirectory already exists, and the copy fails if the destination directory is missing or the user lacks permission to write to it. You may want to add error handling to cover these cases.
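On Python 3.8+, much of this logic can be delegated to shutil.copytree() itself, which accepts dirs_exist_ok=True to merge into a destination that already exists instead of raising. A minimal sketch (the function name mirror_tree is my own):

```python
import shutil

def mirror_tree(src, dst):
    """Recursively copy src into dst, merging into existing directories.

    Requires Python 3.8+ for the dirs_exist_ok keyword. Unlike the
    mtime-checking scripts above, this copies every file unconditionally.
    """
    shutil.copytree(src, dst, dirs_exist_ok=True)
```

Because merging is allowed, running the function twice in a row is safe, which the default copytree() behavior is not.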

Migrate an Oracle Database to MongoDB Using ETL

To migrate an Oracle database to MongoDB using an ETL (extract, transform, load) tool, you would need to use the tool to extract the data from Oracle, transform it into a format that can be imported into MongoDB, and then load it into MongoDB. This process can be complex, depending on the size and structure of your database and on how the relational schema maps onto MongoDB's document model. Here are the general steps you would need to follow:

  1. Use the ETL tool to extract the data from Oracle. This will typically involve configuring the tool to connect to your Oracle database and specifying the tables and columns that you want to extract. The extracted data will be stored in a format that is compatible with the ETL tool.

  2. Use the ETL tool to transform the data into a format that can be imported into MongoDB. This may involve using built-in functions in the ETL tool to convert the data from the Oracle format to the MongoDB format, or writing custom transformations using a scripting language like JavaScript or Python.

  3. Use the ETL tool to load the transformed data into MongoDB. This will typically involve configuring the tool to connect to your MongoDB instance and specifying the collections where the data should be imported.

  4. Once the data has been imported into MongoDB, you may need to perform some additional steps to ensure that the data is properly indexed and optimized for use with MongoDB. This may involve creating indexes on your collections, as well as configuring sharding and replication as needed.

Overall, the process of migrating from Oracle to MongoDB using an ETL tool can be complex and time-consuming, so it’s important to plan carefully and test your migration thoroughly before moving to production.
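The transform step (2) often amounts to turning flat relational rows into documents. A minimal, database-free sketch of such a transformation (the column names here are hypothetical, and a real schema usually needs more reshaping, such as nesting related tables):

```python
def row_to_document(row, columns):
    """Map a relational row tuple to a MongoDB-style document (a dict).

    NULL columns are dropped, since a MongoDB document need not carry
    absent fields the way a relational row must.
    """
    return {col: val for col, val in zip(columns, row) if val is not None}

doc = row_to_document((101, "Alice", None), ["id", "name", "phone"])
# doc is {"id": 101, "name": "Alice"}
```

A real migration would apply a function like this to every extracted row before handing the documents to the load step.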

Data Pipeline From Oracle to MongoDB using Python

Here is a simple example of a data pipeline that transfers data from an Oracle database to a MongoDB database using Python:

# Import necessary libraries
import cx_Oracle
from pymongo import MongoClient

# Connect to the Oracle database
orcl = cx_Oracle.connect(
    user="oracle_user",
    password="oracle_password",
    dsn="oracle_host:oracle_port/oracle_service"
)

# Connect to the MongoDB database
mongodb = MongoClient("mongodb_host:mongodb_port")
db = mongodb["mongodb_database"]

# Create a cursor to perform operations on the Oracle database
orclcursor = orcl.cursor()

# Execute an Oracle query to retrieve the data
orclcursor.execute("SELECT * FROM oracle_table")

# Fetch the result of the Oracle query
oracle_data = orclcursor.fetchall()

# Loop through the result and insert each row into the MongoDB collection
for row in oracle_data:
    db.mongodb_collection.insert_one({
        "column1": row[0],
        "column2": row[1],
        "column3": row[2]
    })

# Close the cursor and the database connection
orclcursor.close()
orcl.close()


This script uses the cx_Oracle library to connect to the Oracle database, and the pymongo library to connect to the MongoDB database. It then retrieves the data from the Oracle table and inserts it into the MongoDB collection. You can modify this script as needed for your specific databases and use case.
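One insert_one() call per row means one network round trip per row, which is slow for large tables. A common refinement is to group rows into batches and pass each batch to pymongo's insert_many(). The chunking itself can be sketched without a database (the batch size of 1000 is arbitrary):

```python
def batched(rows, size=1000):
    """Yield successive chunks of rows, each at most `size` long."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, chunk
        yield batch

# In the pipeline above, the insert loop would become something like:
# for batch in batched(oracle_data):
#     db.mongodb_collection.insert_many(
#         [{"column1": r[0], "column2": r[1], "column3": r[2]} for r in batch]
#     )
```

Python 3.12 ships an equivalent itertools.batched(); the helper above is shown so the sketch runs on older versions too.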

Data Pipeline From Oracle to PostgreSQL using Python

Here is a simple example of a data pipeline that transfers data from an Oracle database to a PostgreSQL database using Python:

# Import necessary libraries
import cx_Oracle
import psycopg2

# Connect to the Oracle database
orcl = cx_Oracle.connect(
    user="oracle_user",
    password="oracle_password",
    dsn="oracle_host:oracle_port/oracle_service"
)

# Connect to the PostgreSQL database
pgdb = psycopg2.connect(
    host="postgres_host",
    user="postgres_user",
    password="postgres_password",
    database="postgres_database"
)

# Create a cursor to perform operations on the Oracle database
orclcursor = orcl.cursor()

# Create a cursor to perform operations on the PostgreSQL database
pgcursor = pgdb.cursor()

# Execute an Oracle query to retrieve the data
orclcursor.execute("SELECT * FROM oracle_table")

# Fetch the result of the Oracle query
oracle_data = orclcursor.fetchall()

# Loop through the result and insert each row into the PostgreSQL table
for row in oracle_data:
    pgcursor.execute("INSERT INTO postgres_table VALUES (%s, %s, %s)", row)

# Commit the changes to the PostgreSQL database
pgdb.commit()

# Close the cursors and databases
orclcursor.close()
pgcursor.close()
orcl.close()
pgdb.close()


This script uses the cx_Oracle library to connect to the Oracle database, and the psycopg2 library to connect to the PostgreSQL database. It then retrieves the data from the Oracle table and inserts it into the PostgreSQL table. You can modify this script as needed for your specific databases and use case.
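Row-at-a-time execute() calls can likewise be batched: DB-API cursors, including psycopg2's, provide executemany(). Since the pattern is the same across drivers, here is a sketch using the standard-library sqlite3 module so it runs without a server (sqlite uses ? placeholders where psycopg2 uses %s; the table name is the placeholder from the script above):

```python
import sqlite3

# In-memory database standing in for PostgreSQL; psycopg2 cursors expose the
# same DB-API executemany(), with %s placeholders instead of sqlite's ?.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE postgres_table (a INTEGER, b TEXT, c TEXT)")

rows = [(1, "x", "y"), (2, "p", "q")]  # e.g. the fetched Oracle rows
cur.executemany("INSERT INTO postgres_table VALUES (?, ?, ?)", rows)
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM postgres_table").fetchone()[0]
```

With psycopg2 the insert loop in the script above would collapse to a single pgcursor.executemany(...) call over oracle_data, cutting round trips considerably.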

Data pipeline from MySQL to PostgreSQL using Python

Here is a simple example of a data pipeline that transfers data from a MySQL database to a PostgreSQL database using Python:

# Import necessary libraries
import mysql.connector
import psycopg2

# Connect to the MySQL database
mydb = mysql.connector.connect(
    host="mysql_host",
    user="mysql_user",
    password="mysql_password",
    database="mysql_database"
)

# Connect to the PostgreSQL database
pgdb = psycopg2.connect(
    host="postgres_host",
    user="postgres_user",
    password="postgres_password",
    database="postgres_database"
)

# Create a cursor to perform operations on the MySQL database
mycursor = mydb.cursor()

# Create a cursor to perform operations on the PostgreSQL database
pgcursor = pgdb.cursor()

# Execute a MySQL query to retrieve the data
mycursor.execute("SELECT * FROM mysql_table")

# Fetch the result of the MySQL query
mysql_data = mycursor.fetchall()

# Loop through the result and insert each row into the PostgreSQL table
for row in mysql_data:
    pgcursor.execute("INSERT INTO postgres_table VALUES (%s, %s, %s)", row)

# Commit the changes to the PostgreSQL database
pgdb.commit()

# Close the cursors and databases
mycursor.close()
pgcursor.close()
mydb.close()
pgdb.close()


This script uses the mysql-connector-python library to connect to the MySQL database, and the psycopg2 library to connect to the PostgreSQL database. It then retrieves the data from the MySQL table and inserts it into the PostgreSQL table. You can modify this script as needed for your specific databases and use case.
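Note that fetchall() loads the entire result set into memory, which can be a problem for large tables. DB-API cursors also offer fetchmany(), which lets you stream rows in fixed-size chunks; a sketch using the standard-library sqlite3 module so it runs without a server (mysql-connector-python and psycopg2 cursors expose the same method):

```python
import sqlite3

def stream_rows(cursor, size=500):
    """Yield rows from a DB-API cursor in fixed-size chunks via fetchmany()."""
    while True:
        batch = cursor.fetchmany(size)
        if not batch:  # empty list means the result set is exhausted
            break
        yield from batch

# Demo against an in-memory table standing in for the MySQL source
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mysql_table (n INTEGER)")
cur.executemany("INSERT INTO mysql_table VALUES (?)", [(i,) for i in range(7)])
cur.execute("SELECT n FROM mysql_table ORDER BY n")
rows = list(stream_rows(cur, size=3))
```

In the pipeline above, replacing `mysql_data = mycursor.fetchall()` with iteration over stream_rows(mycursor) keeps memory use bounded regardless of table size.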