
GnuCash OFX install

csv2ofx is a Python library and command line interface program that converts CSV files to OFX and QIF files for importing into GnuCash or similar financial accounting programs. csv2ofx has built in support for importing csv files from mint, yoodlee, and xero.

Requirements

csv2ofx has been tested and is known to work on Python 3.7, 3.8, and 3.9, and on PyPy3.7.

Installation

csv2ofx is intended to be used either directly from Python or from the command line. Install the package from PyPI:

  pip install csv2ofx
CLI usage

description: csv2ofx converts a csv file to ofx and qif

positional arguments:

  source                the source csv file (defaults to stdin)
  dest                  the output file (defaults to stdout)

optional arguments:

  -h, --help            show this help message and exit
  -a, --account         default account type: 'CHECKING' for OFX and 'Bank' for QIF
  -y, --dayfirst        interpret the first value in ambiguous dates (e.g. 01/05/09) as the day
  -c, --collapse        field used to combine transactions within a split for double entry statements
  -C, --chunksize       number of rows to process at a time (default: 2 ** 14)
  -r, --first-row       number of initial rows to skip (default: 0)
  -R, --last-row        the final rows to process, negative values count from the end (default: inf)
  -O, --first-col       number of initial cols to skip (default: 0)
  -L, --list-mappings   list the available mappings
  -q, --qif             enables 'QIF' output instead of 'OFX'
  -o, --overwrite       overwrite destination file if it exists
  -D, --server-date     OFX server date (default: source file mtime)
  -d, --debug           display the options and arguments passed to the parser

Examples

Print output to stdout:

  csv2ofx ~/Downloads/transactions.csv

Read input from stdin:

  cat file.csv | csv2ofx

Specify a date range from one year ago to yesterday with QIF output:

  csv2ofx -s '-1 year' -e yesterday -q file.csv

Use the yoodlee settings:

  csv2ofx -m yoodlee file.csv

Customization

If you would like to import csv files with field names different from the default, you can modify the mapping file or create your own. New mappings must be placed in the csv2ofx/mappings folder (otherwise you must use the library directly).

The mapping object consists of a dictionary whose keys are OFX/QIF attributes and whose values are functions which should return the corresponding value from a record (csv row). The mapping function will take in a record, e.g., a dictionary keyed by the csv column names.

Required field attributes

  attribute   description
  is_split    does the csv file contain split (double entry) transactions
  type        the transaction type (either debit or credit)
  dayfirst    interpret the first value in ambiguous dates (e.g. 01/05/09) as the day (ignored if parse_fmt is present)
  last_row    the last row to process (zero based, negative values count from the end)
  filter      keep transactions for which the function returns true
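As a sketch of the mapping format described above, here is what a minimal custom mapping might look like. The csv column names ('Date', 'Payee', 'Amount') and the constant account name are hypothetical, chosen for illustration rather than taken from any shipped csv2ofx mapping:

```python
# Hypothetical mapping: keys are OFX/QIF attributes, values are either
# constants or functions that pull the value out of a record (csv row).
mapping = {
    'has_header': True,
    'is_split': False,
    'account': lambda tr: 'Checking',       # constant account name
    'date': lambda tr: tr['Date'],          # csv column names below are made up
    'payee': lambda tr: tr['Payee'],
    'amount': lambda tr: tr['Amount'],
}

# A record is just a dict of csv column names to values.
record = {'Date': '06/12/10', 'Payee': 'Example Store', 'Amount': '-42.00'}
amount = mapping['amount'](record)  # -> '-42.00'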

Library usage

Normal OFX usage:

  import itertools as it

  from meza.io import read_csv, IterStringIO
  from csv2ofx import utils
  from csv2ofx.ofx import OFX
  from csv2ofx.mappings.default import mapping

  ofx = OFX(mapping)
  records = read_csv('path/to/file.csv', has_header=True)
  groups = ofx.gen_groups(records)
  trxns = ofx.gen_trxns(groups)
  cleaned_trxns = ofx.clean_trxns(trxns)
  data = utils.gen_data(cleaned_trxns)
  content = it.chain([ofx.header(), ofx.gen_body(data), ofx.footer()])

  for line in IterStringIO(content):
      print(line)

Normal QIF usage:

  import itertools as it

  from meza.io import read_csv, IterStringIO
  from csv2ofx import utils
  from csv2ofx.qif import QIF
  from csv2ofx.mappings.default import mapping

  qif = QIF(mapping)
  records = read_csv('path/to/file.csv', has_header=True)
  groups = qif.gen_groups(records)
  trxns = qif.gen_trxns(groups)
  cleaned_trxns = qif.clean_trxns(trxns)
  data = utils.gen_data(cleaned_trxns)
  content = it.chain([qif.header(), qif.gen_body(data), qif.footer()])

  for line in IterStringIO(content):
      print(line)

Contributing

csv2ofx comes with a built in task manager, manage.py. To get started, install the development requirements:

  pip install -r dev-requirements.txt

Please mimic the coding style/conventions used in this repo. Run the python linter and nose tests:

  manage lint
  manage test
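The OFX and QIF examples above build the output lazily: the whole document is an iterator chaining a header, a generated body, and a footer, which is then consumed line by line. The same streaming pattern can be sketched in plain Python; the generator names echo the csv2ofx API, but the strings they yield are placeholders, not valid OFX:

```python
import itertools as it

# Illustrative generators mimicking the header/body/footer streaming
# pattern: nothing is materialized until the chain is consumed.
def header():
    yield "HEADER\n"

def gen_body(records):
    for rec in records:
        yield f"TRANSACTION {rec}\n"

def footer():
    yield "FOOTER\n"

content = it.chain(header(), gen_body(["t1", "t2"]), footer())
document = "".join(content)
# document holds the header line, one line per record, then the footer
```

Because each piece is a generator, arbitrarily large csv files can be converted without holding the whole output in memory.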

OFX Home

If one of our users finds an error they can submit a correction. If one of our users finds missing or incomplete information they can offer additional data. This ensures that our information is always up to date.

Enter the name of your institution in the search box on the right of this page and press "Search". Alternatively, follow the directory link beneath the search box to view all institutions in alphabetical order. A list of matching institutions will appear. If you find your institution, click on its name. You will be brought to a page containing the institution data. From there you can enter its data into your finance program, leave a comment, or report an error.

If you cannot find your institution at OFX Home then visit the forum. Read the "Finding Institutional Data" post to learn more. If you find your institution's data please add it to OFX Home. You will do the community a great service.
