Making csv from txt files

I have a lot of txt files like this: Title 1 Text 1 (more than 1 line). And I would like to make one csv file from all of them so that it looks like this: Title 1,Text 1 Title 2,Text 2 Title 3,Text 3 etc. How could I do it? I think awk is a good fit for this, but I don't know how to write it.
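If awk proves awkward here, the same transformation is easy to sketch in Python (illustrative names only: `txt_to_row` and `build_csv` are not from the question; the assumption is that each txt file's first line is the title and all following lines are the text):

```python
import csv
import glob
import io

def txt_to_row(content):
    """Split one txt file's content into [title, text]: the first line is
    the title, the remaining lines are joined into a single Text field."""
    lines = content.splitlines()
    return [lines[0], " ".join(lines[1:]).strip()]

def build_csv(txt_contents):
    """Turn a list of raw txt-file contents into one CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for content in txt_contents:
        writer.writerow(txt_to_row(content))
    return buf.getvalue()

# With real files, something like:
# contents = [open(path).read() for path in sorted(glob.glob("*.txt"))]
# print(build_csv(contents), end="")
```

The csv module handles quoting automatically if a title or text happens to contain a comma, which a naive awk one-liner would not.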

Csv Batch to find file, copy file to another directory

My question is very similar to the question posted here on Stack Overflow, but the difference for me is that I need the batch script to do the following: look at a csv file which holds the name only, without the image extension, find the image with that name in a particular directory, then copy that file to another directory. What modifications would I need to make to this batch script to accomplish that task? @echo off for /f "tokens=1,2 delims=," %%j in (project.csv) do ( copy

Set fields as missing when importing from CSV

My CSV looks like: field_name, format1, format2, format3 bank_acct, /\d{8}/,, sort_code, /\d{2}-\d{2}-\d{2}/,, bank_name, string,, credit_card, /\d{16}/,, customer_id, /\s{2}\d{11}/, /\s{1}\d{12}/, /\d{12}/ ... How can I set the fields to missing where they don't have a second or third format?

CSV Search AutoIT

I have a CSV file that contains 4 columns. I want to search column 2 and change the corresponding data in column 4 using AutoIT: col 1 col 2 col 3 col 4 1 502 shop 25.00 2 106 house 50.00 3 307 boat 15.00
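If AutoIT is not a hard requirement, the lookup-and-update logic can be sketched in Python (hypothetical helper name `update_col4`; the assumption is a plain comma-separated file with no header):

```python
import csv
import io

def update_col4(csv_text, key, new_value):
    """Return csv_text with column 4 replaced by new_value in every row
    whose column 2 equals key (columns counted from 1)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    for row in rows:
        if len(row) >= 4 and row[1] == key:
            row[3] = new_value
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()
```

The same read-all/modify/rewrite pattern is what an AutoIT loop over `FileReadLine` would do; only the syntax differs.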

OleDbException - "Record is too large" while reading a csv

I have a csv file that has many columns. I am trying to read in a few of the columns using an OleDbDataAdapter, which has worked fine on many large files, albeit with not as many columns. string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + Path.GetDirectoryName(_tempFilePath) + "; Extended Properties='text;HDR=YES;';"; using (OleDbConnection conn = new OleDbConnection(connectionString)) { OleDbCommand cmd = new OleDbCommand(String.Format(se

Csv How to check a date like 20140211 in PHP?

When the imported csv has the field set as '20150331', we will import 2015/03/31 00:00:00 as its expire date. When the imported csv omits the field, we will import the current time + 365 days as the expire date. How can I validate this date field in PHP?

pyqt4 QTableView in QMainWindow with csv input and headers

I am working with a QMainWindow and adding a QTableView widget. The table is to be filled with data from a csv file. The csv file first row has the headers, but I cannot find how to write that row into the headers. Even inputting a test header list does not work. Also I want to reverse sort on the "time" column. Here is code restricted to mostly the table: import sys import csv from PyQt4 import QtGui from PyQt4.QtCore import * from array import * class UserWindow(QtGui.QMainWindow):

Csv Redshift: COPY returns successful, but no data in table

I have a table with about 20 columns that I want to copy into Redshift from an S3 bucket as a csv. I run a COPY command that completes successfully, but it returns "0 lines loaded". I've been stumped on this for a while and I'd appreciate any help. I can share the table schema and a portion of the csv if necessary (though I'd like to avoid it if possible). Any idea why this would happen?

Csv QlikView - Loading specific files from remote server

I've been trying to solve this problem for a long time, but now I have to ask for your help. I have one QVD file on my local PC named e.g. server001_CPU.qvd, and on remote servers I have a shared folder with many files of many types. There are also files named server001_CPU_YYYYMMDD.csv (e.g. server001_CPU_20140806.csv) that are generated every day and have the same structure as the local qvd file. They have a DATE column. What I need is (in the loading script) to check the last DATE in the local file and load remote fi

Csv unable to coerce '2012/11/11' to a formatted date (long)

I am new to Cassandra cql (cqlsh 4.1.1, Cassandra 2.0.8.39, CQL spec 3.1.1, Thrift protocol 19.39.0) - using the cql COPY command to a table from a CSV formatted file and I get the following error: Bad Request: unable to coerce '2012/11/11' to a formatted date (long). How do I change a column using cql so that it accepts the date from my CSV file?

neo4j: Cypher LOAD CSV with uuid

I am starting to work with Cypher's LOAD CSV for Neo4j to import larger csv files into my DB. I would like to add a unique ID (uuid) as a property to each imported node. My try was: LOAD CSV FROM "file:..." AS csvLine CREATE (c:Customer { uuid: {uuid}, name: csvLine[0], code: csvLine[1]}) Unfortunately I receive the same UUID for each node (although it's a function that would normally generate a new UUID each time it is called); it looks like the UUID is generated once and then attached to each nod

Csv Pentaho Dimension lookup/update

I have seen the Dimension Lookup/Update documentation here and a few other blogs, but I cannot seem to get a clear idea. I have a table with the following structure: Key Name Code Status IN Out Active The key, name, code, status and active fields come from a csv file. I need to use the Dimension lookup/update step for SCD type 2 and populate IN/Out. After setting up the connection details, I have set the Keys to KEY and the Fields to all the other fields with the option Date of last insert (without strea

Add column to .CSV file with python

I really need some help. We collect engine data which comes in a compressed file with a filename like data_XXXXXX.csv.gz. Compressed these files are about 50 KB; decompressed they grow to about 3.5 MB. They contain about 7000 lines of data where each line has about 240 values, separated by ";". A few lines of data look like this: 2015-04-04 03:03:21;DIG. Engine 1;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;1;1;1;1;1;1;1;1;1;1;1;1;1;1;1;1;0;0;0;0;1;1;1;1;1;1;1;1;1;1;1;1;1;1;1;1;0;0;0;0;0;0;0;0;0;0;0
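A streaming sketch for appending a column, assuming ';'-separated rows as shown (names like `add_column` are illustrative; the value function is a placeholder for whatever the new column should hold):

```python
import csv
import gzip

def add_column(lines, value_fn):
    """Yield each ';'-separated row with value_fn(row) appended, without
    ever holding the whole file in memory."""
    for row in csv.reader(lines, delimiter=";"):
        yield row + [value_fn(row)]

# For the gzipped originals, something like:
# with gzip.open("data_000001.csv.gz", "rt") as f:
#     for row in add_column(f, lambda r: "engine-1"):
#         ...  # write row out with csv.writer(..., delimiter=";")
```

Because `gzip.open(..., "rt")` yields lines lazily, the 3.5 MB decompressed size never has to fit in memory at once.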

d3 - load two specific columns from csv file

I am trying to plot circles on a map based on data from csv files. I want to read the latitude and longitude from the csv file and plot a circle. I am unable to load the two fields. I get an "object undefined" error. Here's my code so far: Here's the link for the CSV file - http://slate-interactives-prod.elasticbeanstalk.com/gun-deaths/getCSV.php <html> <head> <meta charset="utf-8"> <link href="d3-geomap/css/d3.geomap.css" rel="stylesheet">

Csv How to change Column Delimiter in MS VSTS for web performance test

I am using Microsoft VSTS to performance test a web application. I am adding a data pool (.csv file) to parameterize multiple values, but the problem is the .csv file shows it in column-delimited form like: VariableA,VariableB,Variable3 Test1,Test2,Test3 Test4,Test5,Test6 But I want these multiple values in a single column, because whenever we select the column-delimited type, the .csv file automatically converts all values into different columns. In HP LoadRunner we have 3 options [Col

Won't display IP address in CSV

I am trying to retrieve the running applications, the computer's username and its IP address. Now, every time the results are saved to the text file, the IP address part always gives me this result: "Length" "11" Is there any way to get the actual IP address? $savepath = "C:\Users\$([Environment]::UserName)\Desktop\apps\runningapps.txt" Get-Process | where {$_.mainwindowtitle.length -ne 0} | select name, mainwindowtitle| ConvertTo-Csv -NoType | Set-Content $savepath Get-WmiObjec

Csv Extract date from a string using Talend

I am receiving CSV files daily which have header like "AD PERFORMANCE REPORT (Jan 24, 2016)". I would like to extract date from it and use it as date column using Talend. How can I do that?

Powershell Export to CSV with three columns

function getServerInfo { $serverList = Get-Content -Path "C:\Users\username\Desktop\list.txt" $cred = Get-Credential -Credential "username" foreach($server in $serverList) { $osVersion = gwmi win32_operatingSystem -ComputerName $server -ErrorAction SilentlyContinue if($osVersion -eq $null) { $osVersion = "cannot find osversion" } $psv = Invoke-Command -ComputerName $server -ScriptBlock {$PSVersionTable.PSVersion.Major} -ErrorAction Ignore if($psv -eq $null)

Python 3 read csv and keep most recent duplicates

I have a csv file that I am trying to remove rows with duplicate email addresses from. If an email address is a duplicate, I want to keep the row with the highest ID. id email _website _store confirmation 11 test@abc.com base default 1 12 test2@abc.com base default 1 13 test@abc.com base default 1 I have been able to print out a list of the duplicates with the script below, but I need to write a csv keeping only the most recent ID. for row in csv_f: if row[1] not in se
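The "keep the highest ID per email" step could be sketched like this (illustrative function name; assumes column 0 is the numeric id and column 1 the email, as in the sample):

```python
def dedupe_keep_highest_id(rows):
    """rows: iterable of [id, email, ...] lists. Keep only the row with
    the highest numeric id for each email, returned in id order."""
    best = {}
    for row in rows:
        row_id, email = int(row[0]), row[1]
        if email not in best or row_id > int(best[email][0]):
            best[email] = row
    return sorted(best.values(), key=lambda r: int(r[0]))
```

The returned rows can then be written back out with `csv.writer(...).writerows(...)`.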

Integrating CSV person data on Google Maps through Bluemix?

I am working on a project with a large CSV file that contains the location and movement of users. I would like to place this on a custom map in Google Maps via Bluemix and use Bluemix services to explore the data. The primary goals are: getting the CSV data onto the custom Google map; when running, the data should progress in time and show the movement of users; making the CSV points cluster for UX (so that points that are near each other stack together). My primary question is how

Editing a .csv file with a batch file

This is my first question on here. I work as a meteorologist and have some coding experience, though it is far from professionally taught. Basically what I have is a .csv file from a weather station that is giving me data that is too detailed (65.66 degrees and similar values). What I want to do is automate, via a script file, a way to access the .csv file and round off values that are too detailed (take a temp from 65.66 to 66, rounding up for anything above .5 and down for anything below), or f
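The rounding part can be sketched in Python, assuming conventional half-up rounding is wanted (note that Python's built-in round() rounds halves to even, so the decimal module's ROUND_HALF_UP is used instead; `round_half_up` and `round_temp_column` are illustrative names):

```python
import csv
import io
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value):
    """'65.66' -> 66, '64.5' -> 65, '64.49' -> 64."""
    return int(Decimal(value).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def round_temp_column(csv_text, col):
    """Rewrite csv_text with the given column rounded to whole degrees."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row in csv.reader(io.StringIO(csv_text)):
        row[col] = str(round_half_up(row[col]))
        writer.writerow(row)
    return out.getvalue()
```

Decimal avoids the float-representation surprises that make `round(2.675, 2)` style calls behave unexpectedly.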

Compare two csv files and append the new column with awk

I have two csv files. file1.csv example: column 15, value 0812710304015, which is column 10 in file2. M|xxxxxxx|xxxxxx|xxxxxx|xxxxxxxxx|H-SYD-AUAUD-003658-00000013-160606221243|123466789123456|1806||8800|MC-TCSSGK-0812710304015 0000001#M|182137|||0812710304015|04080010194MORTIER/VINCENT MR Fee Transaction 0812710304015 QF TCSSGK 15022602300045 0000000088000000000000000000000000000009010200MC50TCSSGKMORTIER/VINC

Putting CSV Data as is into a New CSV

I have a CSV file with two columns: Identity and User. The User column contains UserPrincipalNames, and the Identity column contains a name. What I'm trying to do is take the UserPrincipalNames and get the DisplayName from them, which I am able to do. What I can't figure out is how to get each row's Identity (which is already in the csv I imported) to be displayed alongside the newly found DisplayName. I'm not using the Identity column to get anything; I just want to display the values aga

Csv Heroku error when adding an Environment Variable that is a Comma Separated Value with Spaces

Here is a variable that is defined in my local .env file in my app. I created it to be a comma separated value, like so: STATE_KEYWORDS=georgia,new york,new jersey,maine,vermont,florida In my seed.rb file, I call on that STATE_KEYWORDS variable by using the "fetch" and "split" methods to turn it into an array, because I need that attribute ("keywords") to be an array: Category.create(name: "U.S. States", keywords: ENV.fetch("STATES_KEYWORDS").split(",")) This works fine when I run my

Trouble opening CSV file in QGIS with Delimited Text plugin

I am trying to open a CSV file in QGIS using the Delimited Text plugin. I'm working with Windows 8 and QGIS 2.18.1. My CSV file has around 4,000 points to be plotted, but when I try to open it from QGIS it only shows the first 20 lines. Those 20 lines are perfect, but I have no idea why the rest of my csv won't open. Look at this picture of the Delimited Text window to check if I'm doing everything right. Also, here is a part of my csv file, showing the last line read by QGIS (I don't kno

Csv jMeter BeanShell postprocessor synchronization

I have some performance tests in jMeter, and in one HTTP request I have a BeanShell PostProcessor. It should write the user's email address at the top of a CSV file (newest on top). It's very important that this file stays sorted. import org.apache.commons.io.FilenameUtils; import org.apache.jmeter.services.FileServer; import java.text.*; import java.io.*; import java.util.*; try { String email = vars.get("emailPrefix") + "+" + vars.get("environment") + "-jm-" + vars.get("randomEmailNumber")+"@someEmail
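The newest-on-top requirement means rewriting the whole file on every append, and within one process that rewrite must be serialized. Here is a sketch of the idea in Python rather than BeanShell (in JMeter itself the cross-thread synchronization would come from something like a Critical Section Controller; `prepend_line` is an illustrative name):

```python
import threading

_lock = threading.Lock()  # serializes writers within this process

def prepend_line(path, line):
    """Write `line` above the existing contents, keeping newest on top."""
    with _lock:
        try:
            with open(path) as f:
                existing = f.read()
        except FileNotFoundError:
            existing = ""
        with open(path, "w") as f:
            f.write(line + "\n" + existing)
```

Note the read-then-rewrite is O(file size) per append; appending normally and reversing the file once after the test run is usually cheaper.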

Csv Read only first few rows or header in sqlContext

As mentioned at https://github.com/databricks/spark-csv, I am also reading a csv: from pyspark.sql import SQLContext sqlContext = SQLContext(sc) df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('cars.csv') Is there any option to read only the header or only the first few rows? Basically I just want to check whether a particular column is present in the dataframe or not.

Csv How can I apply Export to Text functionality in Rails

Using Rails 4, I want to implement import and export functionality: when I click on the EXPORT TO TEXT link, all the data present in my table should be downloaded as text. I have applied the same functionality for Excel, but now I want it for text, and I have no idea how to do it or what to write in the controller. This is my previous code, which I followed from a RailsCast. def import_machine_attendance @emp = Employee.all respond_to do |format| format.html format.csv {

Server goes down while copying data from csv to cassandra

I want to copy data from a csv file (more than 10 million records, file size 4 GB) to Cassandra. For that I have used the COPY command as below: COPY table_name (list_of_columns) FROM 'FilePath' WITH HEADER = true; It loads data into the table, but after loading some data (400-500k records) the server goes down. This might be because the file is huge. What could be the issue? Also, how can I copy only the remaining data instead of truncating the existing data and starting from the beginning?

Without a rowkey in the csv file, can we upload bulk data into an HBase table using the command line or any other way?

I have 1 crore (10 million) records in a csv file, without a rowkey or auto-increment id, and I want to upload that file into an hbase table. But I cannot do it with the hbase command. I have used the following command: hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator="," -Dimporttsv.columns=cf:mobile test1 hdfs://master.ambari.com:8020/user/root/www/dnd_20170705_0.csv but I am getting the following error: ERROR: Must specify exactly one column as HBASE_ROW_KEY Do you have any idea about
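One workaround, if generating a surrogate key outside HBase is acceptable, is to prepend a sequential rowkey column to the csv before running ImportTsv, so the columns spec can name HBASE_ROW_KEY first. A Python sketch (illustrative name `add_rowkey`; whether a plain sequence is a good HBase rowkey for your access pattern is a separate question, since monotonic keys can hotspot one region):

```python
def add_rowkey(lines, start=1):
    """Yield each csv line with a generated sequential rowkey prepended."""
    for i, line in enumerate(lines, start):
        yield "%d,%s" % (i, line.rstrip("\n"))

# e.g. stream the 10M-row file without loading it into memory:
# with open("dnd_20170705_0.csv") as src, open("with_key.csv", "w") as dst:
#     for line in add_rowkey(src):
#         dst.write(line + "\n")
```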

F# CSVProvider only reports first column data

While using CsvProvider on a csv file, I used the code below: http://spatialkeydocs.s3.amazonaws.com/FL_insurance_sample.csv.zip type statsProvider = CsvProvider<"../../FL_insurance_sample.csv",","> let stats = statsProvider.Load("../../FL_insurance_sample.csv") let firstRow = stats.Rows |> Seq.head The CsvProvider only returns the data from the first column. It does identify the 18 columns and their names correctly, yet when you look at the type

Csv Failed to import 3 rows: ParseError - Invalid row length 1 should be 3, given up without retries

I am trying to import a csv file with delimiter='|'. I keep getting this error, and I have been struggling with it for two days. Any help will be appreciated. Below are the details. Cassandra version: [cqlsh 5.0.1 | Cassandra 3.0.9 | CQL spec 3.4.0 | Native protocol v4] This is my csv: row_nr|PRD_ID|X_01 1|3170428144631014|25603.24 2|3170428144632015|25606.24 4|3170428144633017|25602.24 Created keyspace: create keyspace newpqp with replication = {'class:''simplestrategy', 'replication_factor':1}

How do I use Spark Dataframes to export rows from C* to CSV files

I need to periodically archive/cold-store rows from C* tables to CSV. For example: export Jan-Jun 2016's rows in C* table my_table to a CSV my_table.2016-06-30.csv, export Jul-Dec 2016's rows in my_table to my_table.2016-12-31.csv, and so on. I considered CQL to do this, but not all my tables have timestamp columns for my rows. It has been suggested that I use Spark Dataframes to do this (so I can get to metadata like writeTime, available from the Spark Cassandra Connector). I'm new to the S

How to sort a CSV file using Windows command line?

I have a file like this: 9007,5001,800085,,100,40.00,,,,,20170923,,,8157,60400,,,,,,,5001,,,,,51815720718 9007,5001,9995,,100,40.00,,,,,20170930,,,8157,60400,,,,,,,5001,,,,,51815720718 9007,5001,35787654,,370,2.00,,,,,20170923,,,8157,60405,,,,,,,5001,,,,,51815720718 2001,5001,557,,370,4.25,,,,,20170930,,,8157,60405,,,,,,,5001,,,,,51815720718 9007,5001,657,,704,3.75,,4,,,20170930,,,8157,60400,,,,,,,5001,,,,,51815720718 And I have to sort the file like this: 2001,5001,557,,370,4.25,,,,,2017093
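The built-in Windows `sort` command compares lines as text (optionally starting at a character offset with `/+n`), which is often not what a numeric, comma-separated first column needs. If a script is acceptable, the sort is a one-liner in Python (illustrative name; assumes the key column parses as an integer):

```python
def sort_csv_lines(lines, col=0):
    """Sort csv lines numerically by the given zero-based column."""
    return sorted(lines, key=lambda line: int(line.split(",")[col]))
```

For the sample above, a lexicographic whole-line sort would happen to agree on the first column, but it would break as soon as ids of different digit lengths appear (e.g. "657" vs "35787654").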

Extracting matching lines from a CSV

I have a file that looks like this: 64fe12c7-b50c-4f63-b292-99f4ed74e5aa, ip, 1.2.3.4, 64fe12c7-b50c-4f63-b292-99f4ed74e5aa, ip, 4.5.6.7, bacd8a9d-807f-4ae9-95d2-f7cc17222cab, ip, 0.0.0.0/0, silly string bacd8a9d-807f-4ae9-95d2-f7cc17222cab, ip, 0.0.0.0/0, crazy town db86d211-0b09-4a8f-b222-a21a54ad2f9c, ip, 8.9.0.1, wild wood db86d211-0b09-4a8f-b222-a21a54ad2f9c, ip, 0.0.0.0/0, wacky tabacky 611f8cf5-f6f2-4f3a-ad24-12245652a7bd, ip, 0.0.0.0/0, cuckoo cachoo I would like to extract a list o

How to import a csv file if it includes both / and , as delimiters

I have a file with mixed delimiters , and /. When I import it into SAS with the following data step: data SASDATA.Publications ; infile 'R:/Lipeng_Wang/PATSTAT/Publications.csv' DLM = ',' DSD missover lrecl = 32767 firstobs = 3 ; input pat_publn_id :29. publn_auth :$29. publn_nr :$29. publn_nr_original :$29. publn_kind :$29. appln_id :29. publn_date :YYMMDD10. publn_lg :$29. publn_first_gr

CSV import to Cloud SQL from Cloud Storage using Cloud Shell

I have a CSV file on a Cloud Storage instance (bd_storage) and need to import it into an already created table (matriculas) in a Cloud SQL database (test). The thing is that the UI import option by default uses fields separated by comma (',') and my CSV file is semicolon-separated (';'). I know I could use a text editor to change all the semicolons to commas, but the CSV file is too big for my PC to handle (which is the reason I'm using Google Cloud Platform). How can I use the Cloud Shell to
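Since the import wants commas, one option is to rewrite the file in Cloud Shell with a short Python script that streams row by row (so file size doesn't matter) and lets the csv module handle any quoting; the file names below are just placeholders based on the question:

```python
import csv

def semicolons_to_commas(src, dst):
    """Stream a ';'-delimited file to a ','-delimited one, row by row."""
    writer = csv.writer(dst, delimiter=",")
    for row in csv.reader(src, delimiter=";"):
        writer.writerow(row)

# In Cloud Shell, after copying the file down from the bucket:
# with open("matriculas.csv", newline="") as src, \
#      open("matriculas_comma.csv", "w", newline="") as dst:
#     semicolons_to_commas(src, dst)
```

A plain `sed 's/;/,/g'` would be faster but would also corrupt any quoted field that legitimately contains a semicolon; the csv-module round trip avoids that.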

Handling "\" values CSV serde

The CSV serde is unable to escape values containing "\" in a Spark dataframe. I am reading CSV file data using the Spark CSV serde, but it is unable to escape the "\" character. spark.read.option("multiLine","true").option("delimiter",'|').csv("/data/working/dev01/textfile/") Actual Result _c1| _c2 |c3| ----+--------------+ 10 |"viv"|"1"|10 | 10 |"viv"|"1"|10 | 10 |"viv"|"1"|10 | 10 |"viv"|"1"|10 | ----+--------------+ Expected Result "10"|"viv\"|"1"|"10"| "10"|"viv\"|"1"|"10"| "

Exclude a specific series from exporting to csv

I'm using the Highcharts library to plot some data in a gauge chart. My chart looks like the image below. To achieve this plot, I'm using solid gauge and gauge together via the series option (solid gauge for the semicircle and gauge for the dial). ... series: [ { name: 'solidgauge', type: 'solidgauge', data: [data.value], ... }, { name: 'gauge', type: 'gauge', data: [data.value], ... }, ] ... Obviously the data for both series is identical, so when I expo

What parameter should I use in "Data Time Format" of the Export CSV function in Grafana to show the Unix timestamp?

I have an influxdb database and I'm using it in Grafana to view the data. I can export the data shown on the graph using Grafana's Export to CSV function, but the timestamp is converted to YYYY-MM-DDTHH:mm:ssZ. I would like to get the time as a Unix timestamp, but I don't know how to format the "Data Time Format" field in Grafana to use it. What should I put in the Data Time Format to get Unix time instead of UTC time? These are the parameters for the time field: YYYY-MM-DD

Apache NiFi: Mapping a csv with multiple columns to create new rows

I found a similar question on Stack Overflow. This approach worked fine with just a couple of columns, but I realised it is not practical for csvs with a large number of columns. I have a csv with 75 columns. I decided to follow this approach anyway (same link as mentioned above), as instructed in that question: I added the UpdateRecord processor and added the CSVReader and CSVWriter. Then, as told, I entered my SchemaText, which was pretty long as it required me to define the entire set of columns

CSV separator for Karate

Can we use a different separator for CSV files in Karate? I am trying to build a test data file that includes a comma inside a parameter value. In Karate, if I have a comma, the data after the comma is considered a separate value. I tried substituting a pipe symbol for the comma, and it did not work. A sample file looks like this: "Param,eterA"| "Param,eterB" Is there an alternate option?

Not uploading CSV file in vue.js

I have started with Vue and D3. I just want to show my csv data in the console, but when I try to do it with D3's csv function it is not working at all. An array of 16 HTML elements appears, which are the ones in index.html. Would you mind helping me with this? This is my project structure: This is my component code: <template> </template> <script> import * as d3 from "d3"; import { csv } from 'd3'; export default { name: 'mycomponent', data() { return{

Csv groupby with spark java

I can read data from csv with Spark, but I don't know how to groupBy on a specific column. I want to groupBy 'Name'. This is my code: public class readspark { public static void main(String[] args) { final ObjectMapper om = new ObjectMapper(); System.setProperty("hadoop.home.dir", "D:\\Task\\winutils-master\\hadoop-3.0.0"); SparkConf conf = new SparkConf() .setMaster("local[3]") .setAppName("Read Spark CSV")

AWK: How to merge CSV files and eliminate rows that contain certain values?

I have hundreds of CSV files. Each CSV file is similar to this: | KEYWORD | NUMBER OF COMPS | AVGE M E (K) | GS/M | EST. A SE/M | C CORE | |---------|-----------------|--------------|------|-------------|--------| | Apples | 311 | 12 | N/A | <100 | 10 | | Bananas | >1,200 | 737 | N/A | 490 | 88 | | Oranges | 48 | 184 | N/A | N/A | 1 | | Fruits | 161 | 94 | N/A | -
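If awk turns out fiddly here, the merge-and-filter can also be sketched in Python (illustrative names; the DROP_VALUES set is an assumption about which cell values should disqualify a row — adjust to taste):

```python
import csv
import io

DROP_VALUES = {"N/A", "<100"}  # assumed disqualifying values

def merge_and_filter(csv_texts):
    """Concatenate rows from many csv strings, skipping any row that
    contains one of the DROP_VALUES in any cell."""
    merged = []
    for text in csv_texts:
        for row in csv.reader(io.StringIO(text)):
            if not DROP_VALUES & {cell.strip() for cell in row}:
                merged.append(row)
    return merged

# For hundreds of files: merge_and_filter(open(p).read() for p in paths)
```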

How can we create a block size in JMeter with CSV config? Each thread should pick up a specific set of values

How can we create a block size in JMeter with the CSV config? I have 5 users and one Bulkuser.csv file with 4 columns; the file has around 2000 values. I wish to create a block of 400 values for each of my 5 threads [users]: the 1st user will use the 1st 400 values (rows 1-400), the 2nd user will use the next 400 values (rows 401-800), and so on. How can we implement this? Is there a BeanShell pre-processor script that, for each data read, decides which specific rows to read based on the thread number?

How can I use read-csv-file to read from a string instead?

The 2htdp/batch-io library contains the useful read-csv-file procedure for reading a CSV file into a list. It takes a filename as its argument. Unfortunately, it does not take a string containing CSV as its argument. Suppose I have a CSV in a string variable and I want to use read-csv-file to parse it. Is there a way to avoid saving the CSV to a file just to be able to parse the CSV? The documentation says: reads the standard input device (until closed) or the content of file f and produces it

Writing CSV files - fill columns with whitespace or not?

When doing various data analyses, it often makes sense to save some intermediate results as a CSV file. It could be for documentation, to hand over to colleagues who want to work with Excel or similar, or to have a quick way to do a sanity check yourself. But how do I best format such a CSV file? Let's assume I want a classic spreadsheet layout with a header row and the data in columns, like so: Device_id;Location;Mean_reading;Error_count opti-1;Upper-Underburg Backroad 2;1.45;42 ac-4;Valley 23
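A quick way to weigh the trade-off is to generate both variants and observe that the padding becomes part of the field values for any parser that does not strip whitespace (helper names below are illustrative):

```python
import csv
import io

def plain_csv(rows, sep=";"):
    """Standard unpadded output via the csv module."""
    buf = io.StringIO()
    csv.writer(buf, delimiter=sep).writerows(rows)
    return buf.getvalue()

def padded_csv(rows, sep=";"):
    """Pad every column to its widest value for human readability; note
    the trailing spaces then travel inside the fields."""
    widths = [max(len(r[i]) for r in rows) for i in range(len(rows[0]))]
    return "".join(
        sep.join(cell.ljust(w) for cell, w in zip(r, widths)) + "\n"
        for r in rows
    )
```

A common compromise is to write the plain form and view it aligned on demand (e.g. `column -s';' -t file.csv` on Unix-like systems), so the stored data stays clean.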
