Google cloud dataflow Dataflow eclipse example generates rateLimitExceeded error

Following the instructions for "Developing Dataflow Pipelines with the Cloud Dataflow Plugin for Eclipse" ... When I run the code generated by the plugin, I get this error WARNING: There were problems getting current job messages: 429 Too Many Requests { "code" : 429, "errors" : [ { "domain" : "global", "message" : "Request throttled due to project QPS limit being reached.", "reason" : "rateLimitExceeded" } ], "message" : "Request throttled due to project QPS limit being re

Google cloud dataflow How to send messages from Google Dataflow (Apache Beam) on the Flink runner to Kafka

I’m trying to write a proof-of-concept which takes messages from Kafka, transforms them using Beam on Flink, then pushes the results onto a different Kafka topic. I’ve used the KafkaWindowedWordCountExample as a starting point, and that’s doing the first part of what I want to do, but it outputs to text files as opposed to Kafka. FlinkKafkaProducer08 looks promising, but I can’t figure out how to plug it into the pipeline. I was thinking that it would be wrapped with an UnboundedFlinkSink, or s
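One possible sketch of the missing last step, using Beam's own KafkaIO sink rather than wrapping FlinkKafkaProducer08; the broker address, topic name, and the assumption that the results are a PCollection<KV<String, String>> are placeholders. KafkaIO is a regular Beam transform, so it is not tied to a particular runner:

    results
        .apply("WriteToKafka", KafkaIO.<String, String>write()
            .withBootstrapServers("kafkabroker:9092")   // placeholder broker
            .withTopic("output-topic")                  // placeholder topic
            .withKeySerializer(StringSerializer.class)
            .withValueSerializer(StringSerializer.class));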

Google cloud dataflow Google Dataflow: java.lang.IllegalArgumentException: Cannot setCoder(null)

I am trying to build a custom sink for unzipping files. Having this simple code: public static class ZipIO{ public static class Sink extends com.google.cloud.dataflow.sdk.io.Sink<String> { private static final long serialVersionUID = -7414200726778377175L; private final String unzipTarget; public Sink withDestinationPath(String s){ if(s!=""){ return new Sink(s); } else { throw new IllegalArgumentException("must a

Google cloud dataflow Sliding Windows Starting Point

I'm trying to calculate a sliding average for a bounded dataset, which has dates attached to it as well as some value. Based on the docs from: https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/transforms/windowing/SlidingWindows and https://cloud.google.com/dataflow/model/windowing#sliding-time-windows First I am emitting the datestamp with outputWithTimestamp, dividing the timestamps into: Window.into( SlidingWindows .of(Duration.standardDays(3
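For reference, a minimal sketch of the windowing step, assuming timestampedValues is a PCollection<KV<String, Double>> whose elements already carry event timestamps (assigned with outputWithTimestamp as described above). Sliding windows are aligned to the epoch by default; SlidingWindows.withOffset() shifts that starting point if needed:

    PCollection<KV<String, Double>> slidingAverages = timestampedValues
        .apply("SlidingWindow", Window.<KV<String, Double>>into(
            SlidingWindows.of(Duration.standardDays(3))
                .every(Duration.standardDays(1))))   // one 3-day window per day
        .apply("AveragePerKey", Mean.perKey());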

Google cloud dataflow Project "Dataflow: Readonly Artifacts (DO NOT DELETE)" visible in my Developers Console

Is it correct that I can see the project "Dataflow: Readonly Artifacts (DO NOT DELETE)" in my developers console? Ever since I got alpha access to CDF last month, it has been visible. I also noticed that I got charged on the project even before we started to test and run jobs (which was literally just a few hours ago!). My understanding is that CDF is free in alpha, but that you must pay for any services used e.g. BigQuery, GCS etc. However, I would expect those charges to show up in my other

Google cloud dataflow Dataflow performance issues

I'm aware that an update was made to the CDF service a few weeks ago (default worker type & attached PD were changed), and it was made clear that it would make batch jobs slower. However, the performance of our jobs has degraded beyond the point of them actually fulfilling our business needs. For example, for one of our jobs in particular: it reads ~2.7 million rows from a table in BigQuery, has 6 side inputs (BQ tables), does some simple String transformations, and finally writes multiple

Google cloud dataflow Bigtable bulkload using Dataflow is too slow

What is the best way to do a bulk load to Bigtable for patterns like 20GB data files every 3 hrs? Is Dataflow the right way to do this? Our issue with bulk loading Bigtable using Dataflow is that Dataflow's QPS does not seem to match the QPS of Bigtable (5 nodes). I am trying to load 20GB file(s) to Bigtable using Dataflow, and it is taking 4 hrs to ingest into Bigtable. I also keep getting this warning during the run: { "code" : 429, "errors" : [ { "domain" : "global", "message" : "Request t

Google cloud dataflow Dataflow fails with java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)

I'm trying to get a Dataflow job to run on Google Cloud. It always fails with: java.lang.NoSuchMethodError: io.grpc.protobuf.ProtoUtils.marshaller(Lcom/google/protobuf/Message;)Lio/grpc/MethodDescriptor$Marshaller It's a maven project, here are my dependencies: <dependencies> <dependency> <groupId>com.google.cloud.dataflow</groupId> <artifactId>google-cloud-dataflow-java-sdk-all</artifactId> <version>1.8.0</version> </depende

Google cloud dataflow Unknown producer for value SingletonPCollectionView

In the interest of providing a minimal example of my problem, I'm trying to implement a simple Beam job that takes in a String as a side input and applies it to a PCollection which is read from a csv file in Cloud Storage. The result is then output to a .txt file in Cloud Storage. So far, I have tried: Experimenting with PipelineResult.waitUntilFinish (as in (p.run().waitUntilFinish()), altering the placement of the two p.run() commands, and simplifying as much as possible by just using a strin

Google cloud dataflow Cloud Dataflow batch taking hours to join two PCollections on a common key

I am running a Dataflow batch job to join two PCollections on a common key. The two PCollections have millions of rows each: one is 8 million rows and the other is 2 million rows. My job takes more than 4 hours to complete! So I have checked SO posts on related topics, such as: Dataflow Batch Job Stuck in GroupByKey.create() Complex join with google dataflow How to combine multiple PCollections together and give it as input to a ParDo function But did not find any insights on how t

Google cloud dataflow "GC overhead limit exceeded" for long running streaming dataflow job

Running my streaming dataflow job for a longer period of time tends to end up in a "GC overhead limit exceeded" error which brings the job to a halt. How can I best proceed to debug this? java.lang.OutOfMemoryError: GC overhead limit exceeded at com.google.cloud.dataflow.worker.repackaged.com.google.common.collect.HashBasedTable.create (HashBasedTable.java:76) at com.google.cloud.dataflow.worker.WindmillTimerInternals.<init> (WindmillTimerInternals.java:53) at com.google.cloud

Google cloud dataflow Apache Beam and avro : Create a dataflow pipeline without schema

I am building a dataflow pipeline with Apache beam. Below is the pseudo code: PCollection<GenericRecord> rows = pipeline.apply("Read Json from PubSub", <some reader>) .apply("Convert Json to pojo", ParDo.of(new JsonToPojo())) .apply("Convert pojo to GenericRecord", ParDo.of(new PojoToGenericRecord())) .setCoder(AvroCoder.of(GenericRecord.class, schema)); I am trying to get rid of setting the coder in the pipeline as schema won't be known at pipeline creation time (it w
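The coder does need a concrete schema at graph-construction time, so one workaround (a sketch, not necessarily what the poster wants) is to load the .avsc contents before building the pipeline; avscJson and where it comes from are placeholders:

    // avscJson: the Avro schema JSON, e.g. loaded from GCS or a resource file
    // at pipeline-construction time (placeholder)
    Schema schema = new Schema.Parser().parse(avscJson);
    rows.setCoder(AvroCoder.of(GenericRecord.class, schema));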

Google cloud dataflow Can Google dataflow GroupByKey handle hot keys?

Input is PCollection<KV<String,String>>. I have to write files by key, with each line being a value of the KV group. In order to group based on the key, I have 2 options: 1. GroupByKey --> PCollection<KV<String, Iterable<String>>> 2. Combine.perKey.withHotKeyFanout --> PCollection where the value String is the accumulated Strings from all pairs (Combine.CombineFn<String, List<String>, CustomStringObJ>). I can have a million records per key. The collection of keyed-d
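A sketch of option 2, assuming a plain string concatenation is an acceptable stand-in for the poster's CombineFn; the fanout value 16 is arbitrary:

    PCollection<KV<String, String>> combined = input
        .apply("CombineLinesPerKey",
            Combine.<String, String>perKey(
                (Iterable<String> lines) -> String.join("\n", lines))
                .withHotKeyFanout(16));   // pre-combines hot keys on multiple workers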

Google cloud dataflow How to Activate Dataflow Shuffle Service through gcloud CLI

I am trying to activate the Dataflow Shuffle [DS] through gcloud command line interface. I am using this command: gcloud dataflow jobs run ${JOB_NAME_STANDARD} \ --project=${PROJECT_ID} \ --region=us-east1 \ --service-account-email=${SERVICE_ACCOUNT} \ --gcs-location=${TEMPLATE_PATH}/template \ --staging-location=${PIPELINE_FOLDER}/staging \ --parameters "experiments=[shuffle_mode=\"service\"]" The job starts. The Dataflow UI reflects it: However, the logs showing the er

Google cloud dataflow Apache beam/Google dataflow enrich upstream records with downstream aggregates

I have created a Java Apache Beam streaming pipeline that I plan to run on Google Dataflow. It receives elements that look similar to the following: ipAddress, serviceUsed, errorOrSuccess, time, parameter, etc., for example '237.98.58.248', 'service1', 'error', '12345', 'randomParameter', etc. I currently window this data into fixed windows based on the event time. I would like to use my pipeline to calculate the number of errors and successes each IP address received on a per-window basis, and t

Google cloud dataflow Dataflow Pipeline Follows Notebook Execution Number. Can't Update Pipeline

I am trying to update my dataflow pipeline. I like developing using Jupyter notebooks on Google Cloud. However, I've run into this error when trying to update: "The new job is missing steps [5]: read/Read." I understand the reason is because I re-ran some lines in my notebook and added some new lines, so now instead of "[5]: read/Read" it is now "[23]: read/Read" but surely dataflow doesn't need to care about the jupyter notebook execution. Is there some sort of wa

Google cloud dataflow Dataflow wordcount example says I need to specify a --gcpTempLocation parameter

I ran the example wordcount java quickstart with maven command mvn -Pdataflow-runner compile exec:java \ -Dexec.mainClass=org.apache.beam.examples.WordCount \ -Dexec.args="--project=<PROJECT_ID> \ --stagingLocation=gs://<STORAGE_BUCKET>/staging/ \ --output=gs://<STORAGE_BUCKET>/output \ --runner=DataflowRunner \ --region=<REGION>" but I got this stack trace [INFO] Scanning for projects... [INFO] [INFO] --------------------< org.example:word-coun
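The usual fix is to give the DataflowRunner a GCS temp path, for example by adding --gcpTempLocation (or --tempLocation) to the exec args; the bucket placeholder below matches the quickstart command:

    mvn -Pdataflow-runner compile exec:java \
      -Dexec.mainClass=org.apache.beam.examples.WordCount \
      -Dexec.args="--project=<PROJECT_ID> \
      --stagingLocation=gs://<STORAGE_BUCKET>/staging/ \
      --gcpTempLocation=gs://<STORAGE_BUCKET>/temp/ \
      --output=gs://<STORAGE_BUCKET>/output \
      --runner=DataflowRunner \
      --region=<REGION>"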

Google cloud dataflow Does dataflow support custom triggers or updating trigger delays?

TL;DR: Is it possible to create a custom trigger that only fires if some flag is set? Is it possible to deploy the job with a trigger with a huge delay while we know a large data event is happening, and then deploy an update to the job with the trigger having a normal or no delay once that event is finished? Following on from: Remove duplicates across window triggers/firings The situation where this happens most problematically (millions of duplicate firings) is when we're doing a backfill

Google cloud dataflow Total read time in Dataflow job has high variance

I have a Dataflow job that reads log files from my GCS bucket that are split by time and host machine. The structure of the bucket is as follows: /YYYY/MM/DD/HH/mm/HOST/*.gz Each job can end up consuming on the order of 10,000+ log files of around 10-100 KB in size. Normally our job takes approximately 5 minutes to complete. We at times see our jobs spike to 2-3x that amount of time, and find that the majority of the increase is seen in the work items related to reading the data files. How c

Google cloud dataflow How can I create a sequence generator in DataFlow?

Using PTransforms, I'm generating, say, 1 million objects. However, I need to label these objects with individual unique numbers from 1 to 1 million. There is NO specified order in which these objects need to be generated, but the sequence number should range continuously from 1 to 1 million. So, I need what in the database world would be an "autonum" function, also known as a "sequence generator". Is there a way to accomplish this in Google Cloud Dataflow? The only ideas I came up with are to: a) store al
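A minimal sketch of one brute-force approach: funnel everything through a single constant key and number the elements in one DoFn. This serializes the numbering on a single worker, so it only suits collections that can tolerate that; the element type String stands in for the poster's objects:

    PCollection<KV<Long, String>> numbered = objects
        .apply("KeyAll", WithKeys.of("all"))               // one constant key
        .apply("GatherAll", GroupByKey.<String, String>create())
        .apply("AssignSequence", ParDo.of(
            new DoFn<KV<String, Iterable<String>>, KV<Long, String>>() {
              @ProcessElement
              public void process(ProcessContext c) {
                long seq = 1L;
                for (String value : c.element().getValue()) {
                  c.output(KV.of(seq++, value));           // 1..N, no gaps
                }
              }
            }));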

Google cloud dataflow easiest way to schedule a Google Cloud Dataflow job

I just need to run a dataflow pipeline on a daily basis, but it seems to me that suggested solutions like App Engine Cron Service, which requires building a whole web app, seems a bit too much. I was thinking about just running the pipeline from a cron job in a Compute Engine Linux VM, but maybe that's far too simple :). What's the problem with doing it that way, why isn't anybody (besides me I guess) suggesting it?

Google cloud dataflow Dataflow Write to File in Order of PCollection

I have a PCollection which holds KVs and contains only one key-value pair; the key has no meaning and the value holds an Iterable of KVs. The key of each inner KV is a number and its value is an Iterable of Strings. The PCollection is defined like this: PCollection<KV<String, Iterable<KV<Long, Iterable<String>>>>> I want to write to a file on a single machine, sorted by the number: for each number and for each string under that number, a row in the file. Using thi

Google cloud dataflow Status of DoFn inside ParDo

Do we have anything which can provide the completion status of a DoFn, i.e. when the function has finished its execution? Can we generate any trigger or anything that can give a fair idea about successful completion of all the steps written inside the DoFn? Any help will be appreciated.

Google cloud dataflow java.lang.IllegalStateException: Unable to return a default Coder

When I run my pipeline in eclipse it runs fine. But when I export pipeline as jar file and run that I get below error: Exception in thread "main" java.lang.IllegalStateException: Unable to return a default Coder for ToTableRow/ParMultiDo(ToTableRow).out0 [PCollection]. Correct one of the following root causes: No Coder has been manually specified; you may do so using .setCoder(). Inferring a Coder from the CoderRegistry failed: Unable to provide a Coder for com.google.api.servic
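One thing worth trying, sketched here on the assumption that the outputs really are BigQuery TableRow objects (the names come from the error message): set the coder explicitly so the runner does not have to infer it, since inference often behaves differently inside a packaged/shaded jar than inside Eclipse:

    PCollection<TableRow> tableRows = input
        .apply("ToTableRow", ParDo.of(new ToTableRow()))   // the DoFn named in the error
        .setCoder(TableRowJsonCoder.of());                 // explicit coder, no inference needed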

Google cloud dataflow Google Cloud Dataflow - Read in pubsub message as String

Is there a way to receive a message from Pub/Sub in Java code and then turn it into a String of some sort that can be parsed and used as input variables in the code? I have retrieved the Pub/Sub message using PubsubIO; however, I only know how to get it as a PCollection of Strings or as a side input using views, and I cannot find any way to get an actual String out of this data for use in the Java code. Can anyone point me in the right direction? Thanks!
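A minimal sketch: PubsubIO.readStrings() already yields a PCollection<String>, and inside any downstream DoFn each element is an ordinary Java String that can be parsed; the subscription path and the comma-split parsing are placeholders:

    p.apply("ReadMessages",
            PubsubIO.readStrings()
                .fromSubscription("projects/my-project/subscriptions/my-sub"))
     .apply("ParseMessage", ParDo.of(new DoFn<String, String>() {
       @ProcessElement
       public void process(@Element String message, OutputReceiver<String> out) {
         // "message" is a plain String here and can be split/parsed into variables
         String[] fields = message.split(",");
         out.output(fields[0]);
       }
     }));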

Google cloud dataflow Google DataFlow: attaching filename to the message

I'm trying to build a Google Dataflow pipeline which has these steps: read a message containing a filename from a Pub/Sub topic; find the file in the Google Cloud Storage bucket by that filename; read each line from the file; send each line, together with the filename, as a single message to another topic. My problem is that I can't add the filename to the final output message. Current implementation: ConnectorOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(ConnectorOptions.class); Pipeline p = Pipeline.c

Google cloud dataflow Create a custom Sink in apache beam

I am using Apache Beam and trying to create a custom sink, but unfortunately I cannot find any guides on how to create one. Can someone help? Previously in Dataflow I used to override the Sink available in com.google.cloud.dataflow.sdk.io.Sink, but I cannot seem to find a similar class in Beam. Is it still available in Beam somewhere? I am using the Beam 2.3 SDK and Java.

Google cloud dataflow How to Batch By N Elements in Streaming Pipeline With Small Bundles?

I've implemented batching by N elements as described in this answer: Can datastore input in google dataflow pipeline be processed in a batch of N entries at a time? package com.example.dataflow.transform; import com.example.dataflow.event.ClickEvent; import org.apache.beam.sdk.transforms.DoFn; import org.apache.beam.sdk.transforms.windowing.GlobalWindow; import org.joda.time.Instant; import java.util.ArrayList; import java.util.List; public class ClickToCli
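For comparison, a sketch using Beam's built-in GroupIntoBatches transform, which relies on state and timers to accumulate batches of N elements per key even when streaming bundles are tiny; keyedClicks (a PCollection<KV<String, ClickEvent>>) and the batch size 100 are assumptions:

    PCollection<KV<String, Iterable<ClickEvent>>> batches = keyedClicks
        .apply("BatchBy100", GroupIntoBatches.<String, ClickEvent>ofSize(100));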

Google cloud dataflow Cloud Dataflow what is the exact definition of freshness and latency?

Problem: When using Cloud Dataflow, we are presented with 2 metrics (see this page): system latency and data freshness. These are also available in Stackdriver under the following names (extract from here): system_lag: The current maximum duration that an item of data has been awaiting processing, in seconds. data_watermark_age: The age (time since event timestamp) of the most recent item of data that has been fully processed by the pipeline. But these descriptions are still very vague: what does &q

Google cloud dataflow Specifying correct Triggering for 10 minutes window + 5 minutes lateness buffer producing only 1 result

I'm creating a pipeline which ingests an unbounded data source and does an aggregation computation. The computation is done in 10-minute windows based on event time, with a 5-minute buffer for late-arriving events. I want the result of the aggregation to be emitted only once, after that 10-minute window and 5-minute buffer have passed. I don't know how to make the window emit the result only once. I believe the correct way is using the AfterWatermark trigger, but if I'm using withLateFirings() the result wil
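One possible sketch, assuming the goal is a single pane per window that also includes data arriving within the 5-minute buffer: suppress earlier firings with Never.ever(), so the only output is the final pane emitted when the window expires (end of window plus allowed lateness). MyEvent is a placeholder element type:

    PCollection<MyEvent> windowed = events.apply("Window",
        Window.<MyEvent>into(FixedWindows.of(Duration.standardMinutes(10)))
            .triggering(Never.ever())                          // no early/on-time firings
            .withAllowedLateness(Duration.standardMinutes(5))  // final pane after the buffer
            .discardingFiredPanes());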

Google cloud dataflow Google DataFlow sample TrafficStreamingMaxLaneFlow execution

I have succeeded in running the WordCount sample but failed to run the TrafficStreamingMaxLaneFlow sample. What arguments exactly should I use? My command line: mvn exec:java -pl examples -Dexec.mainClass=com.google.cloud.dataflow.examples.TrafficStreamingMaxLaneFlow -Dexec.args="--project=sturdy-analyzer-658 --inputTopic=xxxInputTopic --dataset=xxxDataset --table=MIS --runner=BlockingDataflowPipelineRunner" The result: [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.1:java (de

Google cloud dataflow Conditional iterations in Google cloud dataflow

I am looking at the opportunities for implementing a data analysis algorithm using Google Cloud Dataflow. Mind you, I have no experience with dataflow yet. I am just doing some research on whether it can fulfill my needs. Part of my algorithm contains some conditional iterations, that is, continue until some condition is met: PCollection data = ... while(needsMoreWork(data)) { data = doAStep(data) } I have looked around in the documentation and as far as I can see I am only able to do "it
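A Beam/Dataflow graph is fixed at construction time, so a data-dependent while-loop cannot be expressed directly; a common workaround is to unroll a bounded number of iterations when building the pipeline. DoAStep, Record, and MAX_ITERATIONS below are placeholders for the poster's step, element type, and chosen bound:

    PCollection<Record> current = data;
    for (int i = 0; i < MAX_ITERATIONS; i++) {
      // each iteration becomes its own stage in the static pipeline graph
      current = current.apply("Step" + i, new DoAStep());
    }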

Google cloud dataflow Exceptions in Google Cloud Dataflow pipelines to Cloud Bigtable

Executing Dataflow pipelines, every once in a while we see these exceptions. Is there anything we can do about them? We have a quite simple flow that reads from a file in GCS and creates a record per line of the input file - about 1 million lines in the input file. Also, what happens to the data inside the pipeline? Is it reprocessed? Or is it lost in transit to Bigtable? (609803d25ddab111): io.grpc.StatusRuntimeException: UNKNOWN at io.grpc.Status.asRuntimeException(Status.java:428) at i

Google cloud dataflow Dynamic table name when writing to BQ from dataflow pipelines

As a followup question to the following question and answer: https://stackoverflow.com/questions/31156774/about-key-grouping-with-groupbykey I'd like to confirm with google dataflow engineering team (@jkff) if the 3rd option proposed by Eugene is at all possible with google dataflow: "have a ParDo that takes these keys and creates the BigQuery tables, and another ParDo that takes the data and streams writes to the tables" My understanding is that ParDo/DoFn will process each element, how cou
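For reference, a sketch of how this looks with BigQueryIO's per-element table function, which avoids a separate table-creating ParDo altogether; the project/dataset names, the "type" field used to derive the table name, and the schema variable are all assumptions:

    rows.apply("WriteDynamic", BigQueryIO.writeTableRows()
        .to((SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>)
            row -> new TableDestination(
                "my-project:my_dataset.events_" + row.getValue().get("type"),
                "table per event type"))
        .withSchema(schema)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));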

Google cloud dataflow Detecting keyed state changes

I'm new to the Dataflow programming model and have some trouble wrapping my mind around what I think should be a simple use case: I have a pipeline reading live data from Pub/Sub, this data contains device statuses with (simplified) a serial number and a state (UP or DOWN). A device is guaranteed to send its state at least every 5 minutes, but then of course a device may send the same state multiple times. What I'm trying to achieve is a pipeline that only emits state changes for a device, so
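A sketch of a stateful DoFn that remembers the last status per device and only emits on change, assuming the input has already been keyed as KV<serialNumber, state>:

    public class EmitStateChanges extends DoFn<KV<String, String>, KV<String, String>> {
      @StateId("lastState")
      private final StateSpec<ValueState<String>> lastStateSpec =
          StateSpecs.value(StringUtf8Coder.of());

      @ProcessElement
      public void process(
          @Element KV<String, String> element,
          @StateId("lastState") ValueState<String> lastState,
          OutputReceiver<KV<String, String>> out) {
        String current = element.getValue();
        if (!current.equals(lastState.read())) {
          out.output(element);          // first observation or a real UP/DOWN change
          lastState.write(current);
        }
      }
    }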

Google cloud dataflow Combine.perKey receives empty groups when a Repeatedly trigger is used

I'm using Combine.perKey to combine multiple records into one in Dataflow. I'm finding that if I use the following window, my custom SerializableFunction sometimes receives an empty iterable. p.apply(TextIO.Read.from(INPUT)) .apply(ParDo.of(new ParseRecords())) .apply(Window.<Record>into(FixedWindows.of(Duration.standardHours(24))) .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane() .plusDelayOf(Duration.standardMinutes(1)))) .discardingFiredPanes() .apply(ParD

Google cloud dataflow KafkaIO watermark not advancing

I have the below pipeline for testing purposes: pipeline.apply( "ReadFromKafka", KafkaIO .read() .withBootstrapServers("kafkabroker") .withoutMetadata() ).apply( "OutputCount", Window .into(FixedWindows.of(Duration.standardMinutes(1))) .triggering(AfterWatermark.pastEndOfWindow()) .discardingFiredPanes() .withAllowedLateness(Duration.ZERO) ).apply( "AccumlateCount", Combine.globally(Count.combineFn()).withoutDefaults() ).apply( "PrintCount", MapElement

Google cloud dataflow Best Practices in Http Calls in Cloud Dataflow - Java

What are the best practices for making HTTP calls from a DoFn, in a pipeline that will be running on Google Cloud Dataflow? (Java) I mean, in pure Java without Beam I would need to think about things like async calls, or at least multithreading, and about managing the thread pool, connection pool, and so on. With Dataflow, what would happen if I just have one thread making a sync call in each ProcessElement? What are the best practices for doing HTTP calls in a DoFn?
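A sketch of the usual shape (assuming Java 11's HttpClient): create one client per DoFn instance in @Setup and reuse it for blocking calls in @ProcessElement; Dataflow runs many DoFn instances and threads in parallel, so a simple synchronous call per element is often acceptable:

    public class CallApiFn extends DoFn<String, String> {
      private transient HttpClient client;

      @Setup
      public void setup() {
        client = HttpClient.newHttpClient();   // reused across bundles on this instance
      }

      @ProcessElement
      public void process(@Element String url, OutputReceiver<String> out) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        out.output(response.body());
      }
    }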

Google cloud dataflow Optimizing repeated transformations in Apache Beam/DataFlow

I wonder if Apache Beam/Google DataFlow is smart enough to recognize repeated transformations in the dataflow graph and run them only once. For example, if I have 2 branches: p | GroupByKey() | FlatMap(...) p | combiners.Top.PerKey(...) | FlatMap(...) both will involve grouping elements by key under the hood. Will the execution engine recognize that GroupByKey() has the same input in both cases and run it only once? Or do I need to manually ensure that GroupByKey() in this case proceeds all

Google cloud dataflow Google Dataflow spending hours estimating input size

I'm fairly new to Google Dataflow and I am finding that the service spends several hours estimating the input file size before actually processing data, and will often do several recounts for large input collections before failing. I'm using Apache Beam 2.9 and the io.ReadFromText method. The logs start with a comment about beginning estimation of input file size and continue to log an update every 10k files counted. Is there a way to skip this step or to significantly increase the pace i

Google cloud dataflow BeamSQL Group By query problem with Float value

I tried to get the unique values from a BigQuery table using BeamSQL in Google Dataflow. I implemented the condition with a Group By clause in BeamSQL (sample query below). One of the columns has a float data type. While executing the job I got the exceptions below: Caused by: org.apache.beam.sdk.coders.Coder$NonDeterministicException: org.apache.beam.sdk.coders.RowCoder@81d6d10 is not deterministic because: All fields must have deterministic encoding. Caused by: org.apache.beam.sdk.coders.Coder$N

Google cloud dataflow Apache Beam that uses time as an input

I'm looking to create a Beam input that executes every second and just outputs the time as an input. I know I can create a PCollection from numbers like this: p.apply(Create.of(1, 2, 3, 4, 5)) .setCoder(VarIntCoder.of()) and I could just create a really large array of numbers and window them to one every second, but is there a better way to do this? Thanks.
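A sketch of an alternative using GenerateSequence, which emits an unbounded stream at a fixed rate; the element timestamp then serves as the "time" input:

    p.apply("EverySecond",
            GenerateSequence.from(0).withRate(1, Duration.standardSeconds(1)))
     .apply("ToTimestamp", ParDo.of(new DoFn<Long, Instant>() {
       @ProcessElement
       public void process(ProcessContext c) {
         c.output(c.timestamp());   // the event time Beam assigned to this tick
       }
     }));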

Google cloud dataflow Dataflow stream python windowing

I am new to using Dataflow. I have the following logic: an event is added to Pub/Sub; Dataflow reads Pub/Sub and gets the event; from the event I look in MySQL to find which segments this event is related to, and the list of relations is returned in this step. These segments are independent from one another. Each segment can be divided into two tables in MySQL, results for email and mobile, and they are independent as well. Each segment has rules that can be 1 to n. I would like to process thi

Google cloud dataflow Stateful processing in Apache Beam with key-value states

I am trying to implement a stateful process with Apache Beam. I've gone through both Kenneth Knowles articles (Stateful processing with Apache Beam and Timely (and Stateful) Processing with Apache Beam), but I didn't find a solution to my issue. I am using the Python SDK. In particular, I am trying to have a stateful DoFn that contains key-value objects and I need to add new elements and sometimes remove some. I saw a solution may be to use a SetStateSpec with Tuple coder inside my DoFn class. T
