2017-05-31 10:00:46,169 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:01:55,730 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:01:55,773 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 42 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 320
2017-05-31 10:01:55,773 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 42 milliseconds
2017-05-31 10:02:24,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:02:24,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 68 statistics events to JMS. Sent 68 batch events to JMS
2017-05-31 10:02:46,184 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 13 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:03:55,773 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:03:55,816 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 42 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 22 millis), max Transaction ID 320
2017-05-31 10:03:55,816 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 42 milliseconds
2017-05-31 10:04:46,189 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:05:55,816 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:05:55,853 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 17 millis), max Transaction ID 320
2017-05-31 10:05:55,853 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:06:46,194 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 0 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:07:34,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:07:34,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 68 statistics events to JMS. Sent 68 batch events to JMS
2017-05-31 10:07:55,853 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:07:55,890 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 320
2017-05-31 10:07:55,890 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:08:46,200 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 6 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:09:55,890 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:09:55,927 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 320
2017-05-31 10:09:55,927 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:10:46,205 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:11:55,927 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:11:55,963 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 320
2017-05-31 10:11:55,963 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:12:44,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:12:44,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 68 statistics events to JMS. Sent 68 batch events to JMS
2017-05-31 10:12:46,211 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:13:55,964 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:13:56,016 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 51 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 320
2017-05-31 10:13:56,016 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 52 milliseconds
2017-05-31 10:14:46,216 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:15:56,016 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:15:56,054 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 38 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 320
2017-05-31 10:15:56,054 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 38 milliseconds
2017-05-31 10:16:46,221 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:17:54,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:17:54,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 68 statistics events to JMS. Sent 68 batch events to JMS
2017-05-31 10:17:56,054 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:17:56,090 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 320
2017-05-31 10:17:56,090 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds
2017-05-31 10:18:16,446 INFO [NiFi Web Server-678] o.a.n.controller.StandardProcessorNode Stopping processor: class com.thinkbiganalytics.nifi.v2.spark.ExecuteSparkJob
2017-05-31 10:18:16,446 INFO [StandardProcessScheduler Thread-8] o.a.n.c.s.TimerDrivenSchedulingAgent Stopped scheduling ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] to run
2017-05-31 10:18:17,658 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@44270eff // Another save pending = false
2017-05-31 10:18:25,439 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@44270eff // Another save pending = false
2017-05-31 10:18:28,854 INFO [StandardProcessScheduler Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] to run with 10 threads
2017-05-31 10:18:29,741 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@44270eff // Another save pending = false
2017-05-31 10:18:45,038 INFO [Timer-Driven Process Thread-9] org.apache.hadoop.io.compress.CodecPool Got brand-new compressor [.bz2]
2017-05-31 10:18:45,796 INFO [Timer-Driven Process Thread-7] c.t.nifi.v2.common.FeedProcessor Resolving ID for feed test/users
2017-05-31 10:18:45,814 INFO [Timer-Driven Process Thread-7] c.t.nifi.v2.common.FeedProcessor Resolving id 90eb8a5b-13af-48c7-b533-b08968a3b472 for feed test/users
2017-05-31 10:18:45,980 INFO [Timer-Driven Process Thread-6] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Adding to class path '/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar'
2017-05-31 10:18:45,984 INFO [Timer-Driven Process Thread-6] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Waiting for Spark job to complete
2017-05-31 10:18:46,228 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 6 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 634
2017-05-31 10:18:46,648 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:46 INFO Validator: Running Spark Validator with the following command line args (comma separated):test,users,1496225925786,/tmp/kylo-nifi/spark/test/users/1496225925786/users_field_policy.json
2017-05-31 10:18:46,893 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:46 INFO AnnotationConfigApplicationContext: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@66441260: startup date [Wed May 31 10:18:46 UTC 2017]; root of context hierarchy
2017-05-31 10:18:46,961 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:46 INFO AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2017-05-31 10:18:47,105 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:47 INFO SparkContext: Running Spark version 1.6.2
2017-05-31 10:18:47,581 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:47 INFO SecurityManager: Changing view acls to: nifi
2017-05-31 10:18:47,582 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:47 INFO SecurityManager: Changing modify acls to: nifi
2017-05-31 10:18:47,582 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(nifi); users with modify permissions: Set(nifi)
2017-05-31 10:18:48,038 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Utils: Successfully started service 'sparkDriver' on port 44241.
2017-05-31 10:18:48,345 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Slf4jLogger: Slf4jLogger started
2017-05-31 10:18:48,380 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Remoting: Starting remoting
2017-05-31 10:18:48,502 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:39066]
2017-05-31 10:18:48,507 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39066.
2017-05-31 10:18:48,516 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkEnv: Registering MapOutputTracker
2017-05-31 10:18:48,532 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkEnv: Registering BlockManagerMaster
2017-05-31 10:18:48,543 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-dcb1d64f-8692-48cc-8a76-e99fdba4a2b1
2017-05-31 10:18:48,557 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO MemoryStore: MemoryStore started with capacity 511.5 MB
2017-05-31 10:18:48,612 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkEnv: Registering OutputCommitCoordinator
2017-05-31 10:18:48,726 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Server: jetty-8.y.z-SNAPSHOT
2017-05-31 10:18:48,766 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO AbstractConnector: Started [email protected]:4040
2017-05-31 10:18:48,766 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
2017-05-31 10:18:48,768 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.31.47.159:4040
2017-05-31 10:18:48,795 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO HttpFileServer: HTTP File server directory is /tmp/spark-f9280ff2-682c-4624-8b14-747c1bdd897e/httpd-655f4053-425e-4bc6-b0a6-2ad5f4a033db
2017-05-31 10:18:48,797 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO HttpServer: Starting HTTP Server
2017-05-31 10:18:48,802 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Server: jetty-8.y.z-SNAPSHOT
2017-05-31 10:18:48,804 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO AbstractConnector: Started [email protected]:35883
2017-05-31 10:18:48,804 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Utils: Successfully started service 'HTTP file server' on port 35883.
2017-05-31 10:18:48,816 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkContext: Added JAR file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar at http://172.31.47.159:35883/jars/hive-hcatalog-core.jar with timestamp 1496225928815
2017-05-31 10:18:48,836 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO SparkContext: Added JAR file:/opt/nifi/current/lib/app/kylo-spark-validate-cleanse-jar-with-dependencies.jar at http://172.31.47.159:35883/jars/kylo-spark-validate-cleanse-jar-with-dependencies.jar with timestamp 1496225928836
2017-05-31 10:18:48,918 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Executor: Starting executor ID driver on host localhost
2017-05-31 10:18:48,931 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45313.
2017-05-31 10:18:48,931 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO NettyBlockTransferService: Server created on 45313
2017-05-31 10:18:48,932 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO BlockManagerMaster: Trying to register BlockManager
2017-05-31 10:18:48,934 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO BlockManagerMasterEndpoint: Registering block manager localhost:45313 with 511.5 MB RAM, BlockManagerId(driver, localhost, 45313)
2017-05-31 10:18:48,936 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:48 INFO BlockManagerMaster: Registered BlockManager
2017-05-31 10:18:49,098 INFO [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 381
2017-05-31 10:18:49,120 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (16 records) into single Provenance Log File /opt/nifi/data/provenance_repository/365.prov in 23 milliseconds
2017-05-31 10:18:49,120 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 7111 records
2017-05-31 10:18:49,713 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:49 INFO EventLoggingListener: Logging events to hdfs:///spark-history/local-1496225928894
2017-05-31 10:18:50,382 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO HiveContext: Initializing execution hive, version 1.2.1
2017-05-31 10:18:50,422 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO ClientWrapper: Inspected Hadoop version: 2.7.3.2.5.3.0-37
2017-05-31 10:18:50,423 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.3.2.5.3.0-37
2017-05-31 10:18:50,696 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2017-05-31 10:18:50,716 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO ObjectStore: ObjectStore, initialize called
2017-05-31 10:18:50,860 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2017-05-31 10:18:50,860 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:50 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2017-05-31 10:18:51,594 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 16 events starting with event id: 365. Splitting into 1 batches of 500 each
2017-05-31 10:18:51,616 INFO [Timer-Driven Process Thread-3] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=16}
2017-05-31 10:18:51,616 INFO [Timer-Driven Process Thread-3] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=16}] to queue [thinkbig.feed-manager]
2017-05-31 10:18:51,622 INFO [Timer-Driven Process Thread-3] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:18:44.971Z, maxTime=2017-05-31T10:18:45.979Z, collectionId='2c1f0963-0305-4aef-9912-ce932534a2e3', eventCount=16}
2017-05-31 10:18:51,623 INFO [Timer-Driven Process Thread-3] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:18:44.971Z, maxTime=2017-05-31T10:18:45.979Z, collectionId='2c1f0963-0305-4aef-9912-ce932534a2e3', eventCount=16}] to queue [thinkbig.provenance-event-stats]
2017-05-31 10:18:51,638 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[84]
2017-05-31 10:18:51,639 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 380. Total time to process 16 events was 45 ms. Total time spent querying for events in Nifi was 12 ms. Kylo ProcessingTime: 33 ms
2017-05-31 10:18:51,639 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 380
2017-05-31 10:18:52,731 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:52 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2017-05-31 10:18:53,357 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:53 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2017-05-31 10:18:53,358 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:53 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2017-05-31 10:18:54,491 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2017-05-31 10:18:54,491 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2017-05-31 10:18:54,749 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
2017-05-31 10:18:54,751 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 INFO ObjectStore: Initialized ObjectStore
2017-05-31 10:18:54,850 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
2017-05-31 10:18:54,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:54 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
2017-05-31 10:18:55,052 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO HiveMetaStore: Added admin role in metastore
2017-05-31 10:18:55,055 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO HiveMetaStore: Added public role in metastore
2017-05-31 10:18:55,143 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO HiveMetaStore: No user is added in admin role, since config is empty
2017-05-31 10:18:55,227 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO HiveMetaStore: 0: get_all_databases
2017-05-31 10:18:55,228 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_all_databases
2017-05-31 10:18:55,242 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO HiveMetaStore: 0: get_functions: db=default pat=*
2017-05-31 10:18:55,242 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_functions: db=default pat=*
2017-05-31 10:18:55,243 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
2017-05-31 10:18:55,400 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SessionState: Created local directory: /tmp/15b57157-ad36-4471-b049-b7ed781e161c_resources
2017-05-31 10:18:55,406 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/15b57157-ad36-4471-b049-b7ed781e161c
2017-05-31 10:18:55,408 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SessionState: Created local directory: /tmp/nifi/15b57157-ad36-4471-b049-b7ed781e161c
2017-05-31 10:18:55,410 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/15b57157-ad36-4471-b049-b7ed781e161c/_tmp_space.db
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 ERROR Validator: Failed to perform validation
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:225)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:215)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:458)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.Iterator$class.foreach(Iterator.scala:727)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
2017-05-31 10:18:55,413 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.doValidate(Validator.java:166)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.main(Validator.java:141)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.lang.reflect.Method.invoke(Method.java:606)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:515)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.io.IOException: Permission denied
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.UnixFileSystem.createFileExclusively(Native Method)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.File.createTempFile(File.java:2001)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513)
2017-05-31 10:18:55,414 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more
2017-05-31 10:18:55,415 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SparkContext: Invoking stop() from shutdown hook
2017-05-31 10:18:55,438 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
2017-05-31 10:18:55,438 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
2017-05-31 10:18:55,438 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
2017-05-31 10:18:55,438 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
2017-05-31 10:18:55,438 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
2017-05-31 10:18:55,439 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
2017-05-31 10:18:55,440 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
2017-05-31 10:18:55,440 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
2017-05-31 10:18:55,440 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
2017-05-31 10:18:55,440 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
2017-05-31 10:18:55,440 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
2017-05-31 10:18:55,441 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
2017-05-31 10:18:55,441 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
2017-05-31 10:18:55,441 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
2017-05-31 10:18:55,441 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
2017-05-31 10:18:55,441 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
2017-05-31 10:18:55,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
2017-05-31 10:18:55,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
2017-05-31 10:18:55,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
2017-05-31 10:18:55,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
2017-05-31 10:18:55,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
2017-05-31 10:18:55,443 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
2017-05-31 10:18:55,443 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
2017-05-31 10:18:55,443 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
2017-05-31 10:18:55,496 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SparkUI: Stopped Spark web UI at http://172.31.47.159:4040
2017-05-31 10:18:55,520 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2017-05-31 10:18:55,523 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO MemoryStore: MemoryStore cleared
2017-05-31 10:18:55,524 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO BlockManager: BlockManager stopped
2017-05-31 10:18:55,528 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO BlockManagerMaster: BlockManagerMaster stopped
2017-05-31 10:18:55,530 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
2017-05-31 10:18:55,533 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO SparkContext: Successfully stopped SparkContext
2017-05-31 10:18:55,533 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ShutdownHookManager: Shutdown hook called
2017-05-31 10:18:55,533 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ShutdownHookManager: Deleting directory /tmp/spark-2e568502-c490-48db-8a2d-22bbfc3f5d16
2017-05-31 10:18:55,534 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
2017-05-31 10:18:55,536 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
2017-05-31 10:18:55,538 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ShutdownHookManager: Deleting directory /tmp/spark-f9280ff2-682c-4624-8b14-747c1bdd897e/httpd-655f4053-425e-4bc6-b0a6-2ad5f4a033db
2017-05-31 10:18:55,541 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO ShutdownHookManager: Deleting directory /tmp/spark-f9280ff2-682c-4624-8b14-747c1bdd897e
2017-05-31 10:18:55,559 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:18:55 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
2017-05-31 10:18:55,981 ERROR [Timer-Driven Process Thread-6] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ExecuteSparkJob for Validate And Split Records and flowfile: StandardFlowFileRecord[uuid=22e2de21-bce5-4c64-8a4a-9ec5bc791014,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1496223247528-3, container=default, section=3], offset=721641, length=143371],offset=0,name=userdata1.csv,size=143371] completed with failed status 1
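
Note on the failure above: the exception chain bottoms out in "Caused by: java.io.IOException: Permission denied", thrown from java.io.File.createTempFile (File.java:2001) via org.apache.hadoop.hive.ql.session.SessionState.createTempFile (SessionState.java:818). In other words, the user running the Spark driver (nifi, per the SecurityManager lines at 10:18:47) cannot create a temp file in Hive's local session/scratch directory, so HiveContext initialization aborts before the Validator reads any data. The Java sketch below reproduces that failing call path as a standalone pre-check; it is illustrative only: the class name TempFileCheck and the candidate directory list are assumptions, so substitute the host's actual java.io.tmpdir / hive.exec.local.scratchdir values.

import java.io.File;
import java.io.IOException;

// Minimal sketch of the call that fails in the stack trace above:
// Hive's SessionState.createTempFile() ultimately delegates to
// File.createTempFile(), which throws IOException: Permission denied
// when the target directory is not writable by the current user.
public class TempFileCheck {
    public static void main(String[] args) {
        // Candidate directories to verify as the 'nifi' user; adjust to
        // match the host's actual Hive scratch configuration (the second
        // entry is an assumed default, not taken from this log).
        String[] dirs = { System.getProperty("java.io.tmpdir"), "/tmp/hive" };
        for (String path : dirs) {
            try {
                // The same operation that fails at File.java:2001 in the trace.
                File tmp = File.createTempFile("perm-check", ".tmp", new File(path));
                System.out.println("OK: temp file creation works in " + path);
                tmp.delete();
            } catch (IOException e) {
                // Mirrors: Caused by: java.io.IOException: Permission denied
                System.out.println("FAIL: " + path + " -> " + e.getMessage());
            }
        }
    }
}

Running this as the nifi user on the NiFi host shows which directory rejects the write; the usual remedy is to make that directory writable by nifi (or repoint the Hive session directories at one that is) before restarting the ExecuteSparkJob processor.
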
2017-05-31 10:19:29,100 INFO [Provenance Maintenance Thread-2] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 384
2017-05-31 10:19:29,113 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (3 records) into single Provenance Log File /opt/nifi/data/provenance_repository/381.prov in 13 milliseconds
2017-05-31 10:19:29,113 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 216 records
2017-05-31 10:19:31,739 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 3 events starting with event id: 381. Splitting into 1 batches of 500 each
2017-05-31 10:19:31,749 INFO [Timer-Driven Process Thread-8] c.t.n.p.cache.FeedFlowFileCacheUtil Ending the Job for Feed test.users and flowfile: 22e2de21-bce5-4c64-8a4a-9ec5bc791014. Event: ProvenanceEventRecordDTO{eventId=383, processorName=Failed Flow, componentId=2b1f1ef0-4e08-1bd6-bef7-407e37852fde, flowFile=22e2de21-bce5-4c64-8a4a-9ec5bc791014, previous=null, eventType=DROP, eventDetails=Auto-Terminated by success Relationship, isEndOfJob=true, isBatch=true, isStream=false, feed=test.users}
2017-05-31 10:19:31,749 INFO [Timer-Driven Process Thread-8] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=3}
2017-05-31 10:19:31,749 INFO [Timer-Driven Process Thread-8] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=3}] to queue [thinkbig.feed-manager]
2017-05-31 10:19:31,753 INFO [Timer-Driven Process Thread-8] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:18:55.982Z, maxTime=2017-05-31T10:18:55.990Z, collectionId='a29e55ee-6a31-4890-ab21-e9b8e36a8951', eventCount=3}
2017-05-31 10:19:31,753 INFO [Timer-Driven Process Thread-8] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:18:55.982Z, maxTime=2017-05-31T10:18:55.990Z, collectionId='a29e55ee-6a31-4890-ab21-e9b8e36a8951', eventCount=3}] to queue [thinkbig.provenance-event-stats]
2017-05-31 10:19:31,755 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[87]
2017-05-31 10:19:31,755 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 383. Total time to process 3 events was 16 ms. Total time spent querying for events in Nifi was 10 ms. Kylo ProcessingTime: 6 ms
2017-05-31 10:19:31,755 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 383
2017-05-31 10:19:34,191 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache Time to expire 1 flowfile and all references 1 ms. FeedFlowFile and references left in cache: 0
2017-05-31 10:19:56,090 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:19:56,126 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 339
2017-05-31 10:19:56,126 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds
2017-05-31 10:20:46,233 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:21:56,126 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:21:56,162 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339
2017-05-31 10:21:56,162 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:22:46,238 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:23:04,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:23:04,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 87 statistics events to JMS. Sent 87 batch events to JMS
2017-05-31 10:23:56,162 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:23:56,215 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 52 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339
2017-05-31 10:23:56,215 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 52 milliseconds
2017-05-31 10:24:46,244 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:25:07,458 INFO [Provenance Maintenance Thread-1] o.a.n.p.lucene.UpdateMinimumEventId Updated Minimum Event ID for Provenance Event Repository - Minimum Event ID now 87
2017-05-31 10:25:07,459 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully performed Expiration Action org.apache.nifi.provenance.lucene.UpdateMinimumEventId@431edceb on Provenance Event file /opt/nifi/data/provenance_repository/84.prov.gz in 233074 nanos
2017-05-31 10:25:07,459 INFO [Provenance Maintenance Thread-1] o.a.n.p.lucene.DeleteIndexAction Removed expired Provenance Event file /opt/nifi/data/provenance_repository/84.prov.gz
2017-05-31 10:25:07,459 INFO [Provenance Maintenance Thread-1] o.a.n.p.lucene.DeleteIndexAction Removed expired Provenance Table-of-Contents file /opt/nifi/data/provenance_repository/toc/84.toc
2017-05-31 10:25:07,459 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully performed Expiration Action org.apache.nifi.provenance.expiration.FileRemovalAction@2efea202 on Provenance Event file /opt/nifi/data/provenance_repository/84.prov.gz in 124233 nanos
2017-05-31 10:25:56,215 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:25:56,256 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 41 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339
2017-05-31 10:25:56,256 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 41 milliseconds
2017-05-31 10:26:46,263 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 19 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:27:56,256 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:27:56,299 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 42 milliseconds (Stop-the-world time = 19 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339
2017-05-31 10:27:56,299 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 42 milliseconds
2017-05-31 10:28:14,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:28:14,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 87 statistics events to JMS. Sent 87 batch events to JMS
2017-05-31 10:28:46,280 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 16 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:29:56,299 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:29:56,339 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 40 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339
2017-05-31 10:29:56,340 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 40 milliseconds
2017-05-31 10:30:46,286 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 6 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640
2017-05-31 10:31:56,340 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:31:56,376 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 339
2017-05-31 10:31:56,376 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds | |
2017-05-31 10:32:46,292 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640 | |
2017-05-31 10:33:24,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0 | |
2017-05-31 10:33:24,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 87 statistics events to JMS. Sent 87 batch events to JMS | |
2017-05-31 10:33:56,376 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:33:56,412 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339 | |
2017-05-31 10:33:56,412 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds | |
2017-05-31 10:34:47,627 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 1335 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640 | |
2017-05-31 10:35:56,412 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:35:56,467 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 55 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339 | |
2017-05-31 10:35:56,467 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 55 milliseconds | |
2017-05-31 10:36:47,640 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 12 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640 | |
2017-05-31 10:37:56,467 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:37:56,503 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339 | |
2017-05-31 10:37:56,503 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds | |
2017-05-31 10:38:34,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0 | |
2017-05-31 10:38:34,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 87 statistics events to JMS. Sent 87 batch events to JMS | |
2017-05-31 10:38:48,043 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 403 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640 | |
2017-05-31 10:39:56,504 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:39:56,539 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 339 | |
2017-05-31 10:39:56,540 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds | |
2017-05-31 10:40:19,826 ERROR [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:19,826 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:19,826 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:24,832 ERROR [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:24,832 INFO [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:24,832 INFO [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:29,836 ERROR [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:29,836 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:29,836 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:34,839 ERROR [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:34,839 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:34,839 INFO [Timer-Driven Process Thread-3] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:40,309 ERROR [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:40,310 INFO [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:40,310 INFO [Timer-Driven Process Thread-1] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:45,334 ERROR [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:45,334 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:45,334 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:48,052 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 8 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 640 | |
2017-05-31 10:40:48,224 INFO [Timer-Driven Process Thread-8] org.apache.hadoop.io.compress.CodecPool Got brand-new compressor [.bz2] | |
2017-05-31 10:40:49,131 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 392 | |
2017-05-31 10:40:49,166 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (8 records) into single Provenance Log File /opt/nifi/data/provenance_repository/384.prov in 35 milliseconds | |
2017-05-31 10:40:49,166 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 6761 records | |
2017-05-31 10:40:49,180 INFO [Timer-Driven Process Thread-5] c.t.nifi.v2.common.FeedProcessor Resolving ID for feed test/users | |
2017-05-31 10:40:49,228 ERROR [Timer-Driven Process Thread-5] c.t.nifi.v2.common.FeedProcessor Failed to retrieve feed ID | |
org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94) ~[na:na] | |
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:641) ~[na:na] | |
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:597) ~[na:na] | |
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:572) ~[na:na] | |
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:523) ~[na:na] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.get(MetadataClient.java:661) ~[na:na] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.getFeeds(MetadataClient.java:363) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.core.metadata.MetadataClientProvider.getFeedId(MetadataClientProvider.java:89) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.ensureFeedId(FeedProcessor.java:119) [kylo-nifi-core-processors-0.7.1.jar:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.initialize(FeedProcessor.java:64) [kylo-nifi-core-processors-0.7.1.jar:na] | |
at com.thinkbiganalytics.nifi.v2.init.InitializeFeed.onTrigger(InitializeFeed.java:115) [kylo-nifi-core-processors-0.7.1.jar:na] | |
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92] | |
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_92] | |
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92] | |
2017-05-31 10:40:49,228 ERROR [Timer-Driven Process Thread-5] c.t.nifi.v2.init.InitializeFeed InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] failed to process due to org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error; rolling back session: org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
2017-05-31 10:40:49,234 ERROR [Timer-Driven Process Thread-5] c.t.nifi.v2.init.InitializeFeed | |
org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:641) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:597) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:572) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:523) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.get(MetadataClient.java:661) ~[na:na] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.getFeeds(MetadataClient.java:363) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.core.metadata.MetadataClientProvider.getFeedId(MetadataClientProvider.java:89) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.ensureFeedId(FeedProcessor.java:119) ~[kylo-nifi-core-processors-0.7.1.jar:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.initialize(FeedProcessor.java:64) ~[kylo-nifi-core-processors-0.7.1.jar:na] | |
at com.thinkbiganalytics.nifi.v2.init.InitializeFeed.onTrigger(InitializeFeed.java:115) ~[kylo-nifi-core-processors-0.7.1.jar:na] | |
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92] | |
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_92] | |
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92] | |
2017-05-31 10:40:49,235 ERROR [Timer-Driven Process Thread-5] c.t.nifi.v2.init.InitializeFeed InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] failed to process session due to org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error: org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
2017-05-31 10:40:49,238 ERROR [Timer-Driven Process Thread-5] c.t.nifi.v2.init.InitializeFeed | |
org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:641) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:597) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:572) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:523) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.get(MetadataClient.java:661) ~[na:na] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.getFeeds(MetadataClient.java:363) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.core.metadata.MetadataClientProvider.getFeedId(MetadataClientProvider.java:89) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.ensureFeedId(FeedProcessor.java:119) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.initialize(FeedProcessor.java:64) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.init.InitializeFeed.onTrigger(InitializeFeed.java:115) ~[na:na] | |
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) ~[nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92] | |
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_92] | |
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92] | |
2017-05-31 10:40:49,238 WARN [Timer-Driven Process Thread-5] c.t.nifi.v2.init.InitializeFeed InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] Processor Administratively Yielded for 1 sec due to processing failure | |
2017-05-31 10:40:49,238 WARN [Timer-Driven Process Thread-5] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding InitializeFeed[id=2b1f1ef0-4e16-1bd6-19b1-161b4e2abf86] due to uncaught Exception: org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
2017-05-31 10:40:49,241 WARN [Timer-Driven Process Thread-5] o.a.n.c.t.ContinuallyRunProcessorTask | |
org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error | |
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:641) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:597) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:572) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:523) ~[spring-web-4.2.4.RELEASE.jar:4.2.4.RELEASE] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.get(MetadataClient.java:661) ~[na:na] | |
at com.thinkbiganalytics.metadata.rest.client.MetadataClient.getFeeds(MetadataClient.java:363) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.core.metadata.MetadataClientProvider.getFeedId(MetadataClientProvider.java:89) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.ensureFeedId(FeedProcessor.java:119) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.common.FeedProcessor.initialize(FeedProcessor.java:64) ~[na:na] | |
at com.thinkbiganalytics.nifi.v2.init.InitializeFeed.onTrigger(InitializeFeed.java:115) ~[na:na] | |
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) ~[nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0] | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92] | |
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_92] | |
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_92] | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_92] | |
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92] | |
2017-05-31 10:40:50,346 ERROR [Timer-Driven Process Thread-9] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:50,346 INFO [Timer-Driven Process Thread-9] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:50,346 INFO [Timer-Driven Process Thread-9] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:40:55,378 ERROR [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Error checking to see if Kylo is available. Please ensure Kylo is up and running. | |
2017-05-31 10:40:55,378 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Kylo is not available to process requests yet. This task will exit and wait for its next schedule interval. | |
2017-05-31 10:40:55,378 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Aborted. Last Event Recorded was: 383 | |
2017-05-31 10:41:01,421 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 8 events starting with event id: 384. Splitting into 1 batch of 500 each | |
2017-05-31 10:41:01,497 INFO [Timer-Driven Process Thread-2] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=8} | |
2017-05-31 10:41:01,497 INFO [Timer-Driven Process Thread-2] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=8}] to queue [thinkbig.feed-manager] | |
2017-05-31 10:41:01,505 INFO [Timer-Driven Process Thread-2] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:40:48.110Z, maxTime=2017-05-31T10:40:48.705Z, collectionId='b3b513c2-a2ac-47d3-af3b-18b270a65f31', eventCount=8} | |
2017-05-31 10:41:01,505 INFO [Timer-Driven Process Thread-2] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:40:48.110Z, maxTime=2017-05-31T10:40:48.705Z, collectionId='b3b513c2-a2ac-47d3-af3b-18b270a65f31', eventCount=8}] to queue [thinkbig.provenance-event-stats] | |
2017-05-31 10:41:01,509 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[95] | |
2017-05-31 10:41:01,509 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 391. Total time to process 8 events was 88 ms. Total time spent querying for events in Nifi was 75 ms. Kylo ProcessingTime: 13 ms | |
2017-05-31 10:41:01,509 INFO [Timer-Driven Process Thread-2] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 391 | |
2017-05-31 10:41:19,133 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 395 | |
2017-05-31 10:41:19,161 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (3 records) into single Provenance Log File /opt/nifi/data/provenance_repository/392.prov in 28 milliseconds | |
2017-05-31 10:41:19,161 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 190899 records | |
2017-05-31 10:41:19,235 INFO [Timer-Driven Process Thread-6] c.t.nifi.v2.common.FeedProcessor Resolving ID for feed test/users | |
2017-05-31 10:41:19,304 INFO [Timer-Driven Process Thread-6] c.t.nifi.v2.common.FeedProcessor Resolving id 90eb8a5b-13af-48c7-b533-b08968a3b472 for feed test/users | |
2017-05-31 10:41:19,540 INFO [Timer-Driven Process Thread-2] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Adding to class path '/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar' | |
2017-05-31 10:41:19,546 INFO [Timer-Driven Process Thread-2] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Waiting for Spark job to complete | |
2017-05-31 10:41:20,197 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:20 INFO Validator: Running Spark Validator with the following command line args (comma separated):test,users,1496227249168,/tmp/kylo-nifi/spark/test/users/1496227249168/users_field_policy.json | |
2017-05-31 10:41:20,442 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:20 INFO AnnotationConfigApplicationContext: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@66441260: startup date [Wed May 31 10:41:20 UTC 2017]; root of context hierarchy | |
2017-05-31 10:41:20,509 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:20 INFO AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring | |
2017-05-31 10:41:20,651 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:20 INFO SparkContext: Running Spark version 1.6.2 | |
2017-05-31 10:41:21,120 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO SecurityManager: Changing view acls to: nifi | |
2017-05-31 10:41:21,120 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO SecurityManager: Changing modify acls to: nifi | |
2017-05-31 10:41:21,121 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(nifi); users with modify permissions: Set(nifi) | |
2017-05-31 10:41:21,568 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO Utils: Successfully started service 'sparkDriver' on port 40541. | |
2017-05-31 10:41:21,633 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 3 events starting with event id: 392. Splitting into 1 batches of 500 each | |
2017-05-31 10:41:21,658 INFO [Timer-Driven Process Thread-6] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=3} | |
2017-05-31 10:41:21,658 INFO [Timer-Driven Process Thread-6] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=3}] to queue [thinkbig.feed-manager] | |
2017-05-31 10:41:21,891 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO Slf4jLogger: Slf4jLogger started | |
2017-05-31 10:41:21,927 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:21 INFO Remoting: Starting remoting | |
2017-05-31 10:41:22,048 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.31.47.159:33716] | |
2017-05-31 10:41:22,052 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 33716. | |
2017-05-31 10:41:22,062 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkEnv: Registering MapOutputTracker | |
2017-05-31 10:41:22,078 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkEnv: Registering BlockManagerMaster | |
2017-05-31 10:41:22,089 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-28ec73c0-c23b-46db-9921-9383285e36ba | |
2017-05-31 10:41:22,104 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO MemoryStore: MemoryStore started with capacity 511.5 MB | |
2017-05-31 10:41:22,160 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkEnv: Registering OutputCommitCoordinator | |
2017-05-31 10:41:22,277 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Server: jetty-8.y.z-SNAPSHOT | |
2017-05-31 10:41:22,307 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040 | |
2017-05-31 10:41:22,308 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Utils: Successfully started service 'SparkUI' on port 4040. | |
2017-05-31 10:41:22,310 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.31.47.159:4040 | |
2017-05-31 10:41:22,336 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO HttpFileServer: HTTP File server directory is /tmp/spark-3115710e-1a2f-4fb3-a492-b0c8525fb207/httpd-570d810c-60f5-4ca0-baa4-0a5ccf515820 | |
2017-05-31 10:41:22,338 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO HttpServer: Starting HTTP Server | |
2017-05-31 10:41:22,343 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Server: jetty-8.y.z-SNAPSHOT | |
2017-05-31 10:41:22,345 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO AbstractConnector: Started SocketConnector@0.0.0.0:33900 | |
2017-05-31 10:41:22,345 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Utils: Successfully started service 'HTTP file server' on port 33900. | |
2017-05-31 10:41:22,356 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkContext: Added JAR file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar at http://172.31.47.159:33900/jars/hive-hcatalog-core.jar with timestamp 1496227282356 | |
2017-05-31 10:41:22,377 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO SparkContext: Added JAR file:/opt/nifi/current/lib/app/kylo-spark-validate-cleanse-jar-with-dependencies.jar at http://172.31.47.159:33900/jars/kylo-spark-validate-cleanse-jar-with-dependencies.jar with timestamp 1496227282377 | |
2017-05-31 10:41:22,461 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Executor: Starting executor ID driver on host localhost | |
2017-05-31 10:41:22,473 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41370. | |
2017-05-31 10:41:22,474 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO NettyBlockTransferService: Server created on 41370 | |
2017-05-31 10:41:22,475 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO BlockManagerMaster: Trying to register BlockManager | |
2017-05-31 10:41:22,477 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO BlockManagerMasterEndpoint: Registering block manager localhost:41370 with 511.5 MB RAM, BlockManagerId(driver, localhost, 41370) | |
2017-05-31 10:41:22,478 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:22 INFO BlockManagerMaster: Registered BlockManager | |
2017-05-31 10:41:24,206 INFO [Timer-Driven Process Thread-6] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:40:49.166Z, maxTime=2017-05-31T10:40:49.168Z, collectionId='89e3b877-1cc3-4b32-b676-aa865af99c9f', eventCount=3} | |
2017-05-31 10:41:24,207 INFO [Timer-Driven Process Thread-6] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:40:49.166Z, maxTime=2017-05-31T10:40:49.168Z, collectionId='89e3b877-1cc3-4b32-b676-aa865af99c9f', eventCount=3}] to queue [thinkbig.provenance-event-stats] | |
2017-05-31 10:41:24,217 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[98] | |
2017-05-31 10:41:24,217 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 394. Total time to process 3 events was 2584 ms. Total time spent querying for events in Nifi was 24 ms. Kylo ProcessingTime: 2560 ms | |
2017-05-31 10:41:24,217 INFO [Timer-Driven Process Thread-6] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 394 | |
2017-05-31 10:41:24,219 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:24 INFO EventLoggingListener: Logging events to hdfs:///spark-history/local-1496227282436 | |
2017-05-31 10:41:24,894 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:24 INFO HiveContext: Initializing execution hive, version 1.2.1 | |
2017-05-31 10:41:24,934 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:24 INFO ClientWrapper: Inspected Hadoop version: 2.7.3.2.5.3.0-37 | |
2017-05-31 10:41:24,935 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:24 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.3.2.5.3.0-37 | |
2017-05-31 10:41:25,209 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:25 INFO HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore | |
2017-05-31 10:41:25,230 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:25 INFO ObjectStore: ObjectStore, initialize called | |
2017-05-31 10:41:25,378 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:25 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
2017-05-31 10:41:25,378 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:25 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
2017-05-31 10:41:27,058 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:27 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
2017-05-31 10:41:27,683 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:27 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:41:27,683 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:27 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:41:28,942 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:28 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:41:28,942 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:28 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:41:29,192 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
2017-05-31 10:41:29,194 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ObjectStore: Initialized ObjectStore | |
2017-05-31 10:41:29,294 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
2017-05-31 10:41:29,405 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
2017-05-31 10:41:29,528 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO HiveMetaStore: Added admin role in metastore | |
2017-05-31 10:41:29,531 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO HiveMetaStore: Added public role in metastore | |
2017-05-31 10:41:29,608 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
2017-05-31 10:41:29,698 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO HiveMetaStore: 0: get_all_databases | |
2017-05-31 10:41:29,699 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_all_databases | |
2017-05-31 10:41:29,713 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
2017-05-31 10:41:29,713 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
2017-05-31 10:41:29,714 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:41:29,905 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO SessionState: Created local directory: /tmp/3650bb3f-4f91-43fb-a908-dd609e29d2e8_resources | |
2017-05-31 10:41:29,911 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/3650bb3f-4f91-43fb-a908-dd609e29d2e8 | |
2017-05-31 10:41:29,913 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO SessionState: Created local directory: /tmp/nifi/3650bb3f-4f91-43fb-a908-dd609e29d2e8 | |
2017-05-31 10:41:29,916 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/3650bb3f-4f91-43fb-a908-dd609e29d2e8/_tmp_space.db | |
2017-05-31 10:41:29,919 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 ERROR Validator: Failed to perform validation | |
2017-05-31 10:41:29,919 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied | |
2017-05-31 10:41:29,919 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) | |
2017-05-31 10:41:29,919 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:225) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:215) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:458) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterable.foreach(Iterable.scala:54) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.doValidate(Validator.java:166) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.main(Validator.java:141) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.lang.reflect.Method.invoke(Method.java:606) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:515) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.io.IOException: Permission denied | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.UnixFileSystem.createFileExclusively(Native Method) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.File.createTempFile(File.java:2001) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) | |
2017-05-31 10:41:29,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more | |
2017-05-31 10:41:29,921 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO SparkContext: Invoking stop() from shutdown hook | |
2017-05-31 10:41:29,944 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null} | |
2017-05-31 10:41:29,944 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null} | |
2017-05-31 10:41:29,945 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null} | |
2017-05-31 10:41:29,945 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null} | |
2017-05-31 10:41:29,946 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null} | |
2017-05-31 10:41:29,947 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null} | |
2017-05-31 10:41:29,948 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:29 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null} | |
2017-05-31 10:41:30,001 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO SparkUI: Stopped Spark web UI at http://172.31.47.159:4040 | |
2017-05-31 10:41:30,025 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! | |
2017-05-31 10:41:30,028 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO MemoryStore: MemoryStore cleared | |
2017-05-31 10:41:30,029 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO BlockManager: BlockManager stopped | |
2017-05-31 10:41:30,033 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO BlockManagerMaster: BlockManagerMaster stopped | |
2017-05-31 10:41:30,035 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! | |
2017-05-31 10:41:30,039 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO SparkContext: Successfully stopped SparkContext | |
2017-05-31 10:41:30,040 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO ShutdownHookManager: Shutdown hook called | |
2017-05-31 10:41:30,040 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-3115710e-1a2f-4fb3-a492-b0c8525fb207 | |
2017-05-31 10:41:30,040 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
2017-05-31 10:41:30,042 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
2017-05-31 10:41:30,043 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-7d864ccb-86be-4fd3-b0f9-e5ee0ad0605a | |
2017-05-31 10:41:30,047 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-3115710e-1a2f-4fb3-a492-b0c8525fb207/httpd-570d810c-60f5-4ca0-baa4-0a5ccf515820 | |
2017-05-31 10:41:30,062 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:41:30 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
2017-05-31 10:41:30,486 ERROR [Timer-Driven Process Thread-2] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ExecuteSparkJob for Validate And Split Records and flowfile: StandardFlowFileRecord[uuid=5276b8c9-939b-4e33-b90b-ba20ed785131,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1496223247528-3, container=default, section=3], offset=1008492, length=143371],offset=0,name=userdata1.csv,size=143371] completed with failed status 1 | |
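
The root cause of this first failure is visible in the trace above: the Spark driver (running as the nifi user, per the SecurityManager and audit lines) dies while initializing its HiveContext because org.apache.hadoop.hive.ql.session.SessionState.createTempFile, via java.io.File.createTempFile, throws java.io.IOException: Permission denied. In Hive 1.2.1 that temp file is created inside Hive's local scratch directory (hive.exec.local.scratchdir, typically /tmp/hive or a per-user directory), so the likely culprit is a scratch directory owned by a different account. Below is a minimal, self-contained probe of that hypothesis; "/tmp/hive" is only an assumed default, and the authoritative path comes from hive.exec.local.scratchdir in hive-site.xml.

import java.io.File;
import java.io.IOException;

// Probe sketch: reproduce the failing call chain from the trace above
// (SessionState.createTempFile -> File.createTempFile) against a candidate
// scratch directory. "/tmp/hive" is an assumed default, not taken from the log.
public class ScratchDirProbe {
    public static void main(String[] args) {
        File scratchDir = new File(args.length > 0 ? args[0] : "/tmp/hive");
        System.out.println(scratchDir + " exists=" + scratchDir.exists()
                + " writable=" + scratchDir.canWrite());
        try {
            File probe = File.createTempFile("probe", ".tmp", scratchDir);
            System.out.println("temp file created OK: " + probe);
            probe.delete();
        } catch (IOException e) {
            // Same exception class and message as in the log above.
            System.out.println("createTempFile failed: " + e.getMessage());
        }
    }
}

If this probe fails when run as nifi but succeeds as the directory's owner, the scratch directory's ownership or mode is the problem (see the remediation sketch after the second failure below).
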
2017-05-31 10:41:56,540 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:41:56,576 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 35 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 358 | |
2017-05-31 10:41:56,576 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 35 milliseconds | |
2017-05-31 10:41:58,448 INFO [FileSystemRepository Workers Thread-1] o.a.n.c.repository.FileSystemRepository Successfully destroyed 1 Resource Claims for Container default in 0 millis | |
2017-05-31 10:41:59,135 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 403 | |
2017-05-31 10:41:59,202 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (8 records) into single Provenance Log File /opt/nifi/data/provenance_repository/395.prov in 68 milliseconds | |
2017-05-31 10:41:59,202 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 953 records | |
2017-05-31 10:41:59,381 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 8 events starting with event id: 395. Splitting into 1 batches of 500 each | |
2017-05-31 10:41:59,403 INFO [Timer-Driven Process Thread-7] c.t.n.p.cache.FeedFlowFileCacheUtil Ending the Job for Feed test.users and flowfile: 5276b8c9-939b-4e33-b90b-ba20ed785131. Event: ProvenanceEventRecordDTO{eventId=402, processorName=Failed Flow, componentId=2b1f1ef0-4e08-1bd6-bef7-407e37852fde, flowFile=5276b8c9-939b-4e33-b90b-ba20ed785131, previous=null, eventType=DROP, eventDetails=Auto-Terminated by success Relationship, isEndOfJob=true, isBatch=true, isStream=false, feed=test.users} | |
2017-05-31 10:41:59,403 INFO [Timer-Driven Process Thread-7] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=8} | |
2017-05-31 10:41:59,403 INFO [Timer-Driven Process Thread-7] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=8}] to queue [thinkbig.feed-manager] | |
2017-05-31 10:41:59,408 INFO [Timer-Driven Process Thread-7] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:41:19.345Z, maxTime=2017-05-31T10:41:30.494Z, collectionId='69b0e95f-5ad3-45e3-bb07-0c50a731c1b0', eventCount=8} | |
2017-05-31 10:41:59,408 INFO [Timer-Driven Process Thread-7] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:41:19.345Z, maxTime=2017-05-31T10:41:30.494Z, collectionId='69b0e95f-5ad3-45e3-bb07-0c50a731c1b0', eventCount=8}] to queue [thinkbig.provenance-event-stats] | |
2017-05-31 10:41:59,411 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[106] | |
2017-05-31 10:41:59,411 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 402. Total time to process 8 events was 30 ms. Total time spent querying for events in Nifi was 21 ms. Kylo ProcessingTime: 9 ms | |
2017-05-31 10:41:59,411 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 402 | |
2017-05-31 10:42:04,191 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache Time to expire 1 flowfile and all references 1 ms. FeedFlowFile and references left in cache: 0 | |
2017-05-31 10:42:48,067 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 14 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 649 | |
2017-05-31 10:43:44,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0 | |
2017-05-31 10:43:44,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 106 statistics events to JMS. Sent 106 batch events to JMS | |
2017-05-31 10:43:56,576 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:43:56,611 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 34 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 358 | |
2017-05-31 10:43:56,611 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 34 milliseconds | |
2017-05-31 10:44:48,072 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 649 | |
2017-05-31 10:45:56,611 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:45:56,645 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 34 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 15 millis), max Transaction ID 358 | |
2017-05-31 10:45:56,645 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 34 milliseconds | |
2017-05-31 10:46:48,085 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 12 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 649 | |
2017-05-31 10:47:56,645 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository | |
2017-05-31 10:47:56,682 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 358 | |
2017-05-31 10:47:56,682 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds | |
2017-05-31 10:48:48,090 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 649 | |
2017-05-31 10:48:54,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0 | |
2017-05-31 10:48:54,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 106 statistics events to JMS. Sent 106 batch events to JMS | |
2017-05-31 10:48:54,222 INFO [Timer-Driven Process Thread-8] org.apache.hadoop.io.compress.CodecPool Got brand-new compressor [.bz2] | |
2017-05-31 10:48:54,801 INFO [Timer-Driven Process Thread-10] c.t.nifi.v2.common.FeedProcessor Resolving ID for feed test/users | |
2017-05-31 10:48:54,831 INFO [Timer-Driven Process Thread-10] c.t.nifi.v2.common.FeedProcessor Resolving id 90eb8a5b-13af-48c7-b533-b08968a3b472 for feed test/users | |
2017-05-31 10:48:55,444 INFO [Timer-Driven Process Thread-10] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Adding to class path '/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar' | |
2017-05-31 10:48:55,449 INFO [Timer-Driven Process Thread-10] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Waiting for Spark job to complete | |
2017-05-31 10:48:56,091 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:56 INFO Validator: Running Spark Validator with the following command line args (comma separated):test,users,1496227734791,/tmp/kylo-nifi/spark/test/users/1496227734791/users_field_policy.json | |
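
For reference, the line above shows how ExecuteSparkJob invokes the Kylo Validator on this retry: four positional arguments (logged comma-separated) naming the feed category, feed name, a processing timestamp, and the path to the generated field-policy JSON. The authoritative parsing lives in com.thinkbiganalytics.spark.datavalidator.Validator.main (Validator.java:141 in the traces); the unpacking below is only an illustration of that shape, with field meanings inferred from the log line rather than from Kylo source.

// Illustrative unpacking of the Validator argument vector as logged above.
public class ValidatorArgsSketch {
    public static void main(String[] args) {
        if (args.length < 4) {
            System.err.println("usage: category feed processingDttm fieldPolicyJsonPath");
            return;
        }
        String category = args[0];        // "test"
        String feed = args[1];            // "users"
        String processingDttm = args[2];  // "1496227734791"
        String fieldPolicyPath = args[3]; // ".../users_field_policy.json"
        System.out.printf("validating feed %s.%s, run %s, policy %s%n",
                category, feed, processingDttm, fieldPolicyPath);
    }
}
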
2017-05-31 10:48:56,340 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:56 INFO AnnotationConfigApplicationContext: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7f129113: startup date [Wed May 31 10:48:56 UTC 2017]; root of context hierarchy | |
2017-05-31 10:48:56,408 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:56 INFO AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring | |
2017-05-31 10:48:56,546 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:56 INFO SparkContext: Running Spark version 1.6.2 | |
2017-05-31 10:48:57,000 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO SecurityManager: Changing view acls to: nifi | |
2017-05-31 10:48:57,001 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO SecurityManager: Changing modify acls to: nifi | |
2017-05-31 10:48:57,001 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(nifi); users with modify permissions: Set(nifi) | |
2017-05-31 10:48:57,445 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO Utils: Successfully started service 'sparkDriver' on port 38290. | |
2017-05-31 10:48:57,740 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO Slf4jLogger: Slf4jLogger started | |
2017-05-31 10:48:57,775 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO Remoting: Starting remoting | |
2017-05-31 10:48:57,899 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:38873] | |
2017-05-31 10:48:57,904 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 38873. | |
2017-05-31 10:48:57,914 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO SparkEnv: Registering MapOutputTracker | |
2017-05-31 10:48:57,929 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO SparkEnv: Registering BlockManagerMaster | |
2017-05-31 10:48:57,940 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-9a6de78e-8c18-40e4-b121-c70dd5ccbba5 | |
2017-05-31 10:48:57,955 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:57 INFO MemoryStore: MemoryStore started with capacity 511.5 MB | |
2017-05-31 10:48:58,011 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO SparkEnv: Registering OutputCommitCoordinator | |
2017-05-31 10:48:58,125 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Server: jetty-8.y.z-SNAPSHOT | |
2017-05-31 10:48:58,156 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO AbstractConnector: Started [email protected]:4040 | |
2017-05-31 10:48:58,157 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Utils: Successfully started service 'SparkUI' on port 4040. | |
2017-05-31 10:48:58,158 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.31.47.159:4040 | |
2017-05-31 10:48:58,185 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO HttpFileServer: HTTP File server directory is /tmp/spark-0c6fddb9-d414-4828-a6f9-84a6e88165ee/httpd-1e68e202-fa1d-4e47-a44c-d0a4942d0155 | |
2017-05-31 10:48:58,187 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO HttpServer: Starting HTTP Server | |
2017-05-31 10:48:58,192 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Server: jetty-8.y.z-SNAPSHOT | |
2017-05-31 10:48:58,194 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO AbstractConnector: Started [email protected]:33268 | |
2017-05-31 10:48:58,194 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Utils: Successfully started service 'HTTP file server' on port 33268. | |
2017-05-31 10:48:58,206 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO SparkContext: Added JAR file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar at http://172.31.47.159:33268/jars/hive-hcatalog-core.jar with timestamp 1496227738205 | |
2017-05-31 10:48:58,226 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO SparkContext: Added JAR file:/opt/nifi/current/lib/app/kylo-spark-validate-cleanse-jar-with-dependencies.jar at http://172.31.47.159:33268/jars/kylo-spark-validate-cleanse-jar-with-dependencies.jar with timestamp 1496227738226 | |
2017-05-31 10:48:58,308 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Executor: Starting executor ID driver on host localhost | |
2017-05-31 10:48:58,320 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35930. | |
2017-05-31 10:48:58,321 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO NettyBlockTransferService: Server created on 35930 | |
2017-05-31 10:48:58,322 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO BlockManagerMaster: Trying to register BlockManager | |
2017-05-31 10:48:58,324 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO BlockManagerMasterEndpoint: Registering block manager localhost:35930 with 511.5 MB RAM, BlockManagerId(driver, localhost, 35930) | |
2017-05-31 10:48:58,325 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:58 INFO BlockManagerMaster: Registered BlockManager | |
2017-05-31 10:48:59,131 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:59 INFO EventLoggingListener: Logging events to hdfs:///spark-history/local-1496227738283 | |
2017-05-31 10:48:59,149 INFO [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 419 | |
2017-05-31 10:48:59,165 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (16 records) into single Provenance Log File /opt/nifi/data/provenance_repository/403.prov in 17 milliseconds | |
2017-05-31 10:48:59,165 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 2222 records | |
2017-05-31 10:48:59,802 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:59 INFO HiveContext: Initializing execution hive, version 1.2.1 | |
2017-05-31 10:48:59,842 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:59 INFO ClientWrapper: Inspected Hadoop version: 2.7.3.2.5.3.0-37 | |
2017-05-31 10:48:59,842 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:48:59 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.7.3.2.5.3.0-37 | |
2017-05-31 10:49:00,111 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:00 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
2017-05-31 10:49:00,131 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:00 INFO ObjectStore: ObjectStore, initialize called | |
2017-05-31 10:49:00,276 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:00 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
2017-05-31 10:49:00,276 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:00 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
2017-05-31 10:49:01,100 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 16 events starting with event id: 403. Splitting into 1 batches of 500 each | |
2017-05-31 10:49:01,122 INFO [Timer-Driven Process Thread-8] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=16} | |
2017-05-31 10:49:01,122 INFO [Timer-Driven Process Thread-8] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=16}] to queue [thinkbig.feed-manager] | |
2017-05-31 10:49:01,138 INFO [Timer-Driven Process Thread-8] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:48:54.143Z, maxTime=2017-05-31T10:48:55.442Z, collectionId='63312265-85ad-4dd6-9d3e-17f82d7f9191', eventCount=16} | |
2017-05-31 10:49:01,138 INFO [Timer-Driven Process Thread-8] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:48:54.143Z, maxTime=2017-05-31T10:48:55.442Z, collectionId='63312265-85ad-4dd6-9d3e-17f82d7f9191', eventCount=16}] to queue [thinkbig.provenance-event-stats] | |
2017-05-31 10:49:01,155 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[122] | |
2017-05-31 10:49:01,157 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 418. Total time to process 16 events was 57 ms. Total time spent querying for events in Nifi was 19 ms. Kylo ProcessingTime: 38 ms | |
2017-05-31 10:49:01,157 INFO [Timer-Driven Process Thread-8] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 418 | |
2017-05-31 10:49:02,241 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:02 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
2017-05-31 10:49:02,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:49:02,920 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:49:04,129 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:49:04,129 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:49:04,367 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
2017-05-31 10:49:04,368 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO ObjectStore: Initialized ObjectStore | |
2017-05-31 10:49:04,456 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
2017-05-31 10:49:04,561 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
2017-05-31 10:49:04,652 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO HiveMetaStore: Added admin role in metastore | |
2017-05-31 10:49:04,654 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO HiveMetaStore: Added public role in metastore | |
2017-05-31 10:49:04,724 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
2017-05-31 10:49:04,806 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO HiveMetaStore: 0: get_all_databases | |
2017-05-31 10:49:04,807 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_all_databases | |
2017-05-31 10:49:04,820 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
2017-05-31 10:49:04,821 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO audit: ugi=nifi ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
2017-05-31 10:49:04,822 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
2017-05-31 10:49:04,974 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO SessionState: Created local directory: /tmp/9a00120e-91f4-4350-b59e-d953059b4b92_resources | |
2017-05-31 10:49:04,982 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/9a00120e-91f4-4350-b59e-d953059b4b92 | |
2017-05-31 10:49:04,983 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO SessionState: Created local directory: /tmp/nifi/9a00120e-91f4-4350-b59e-d953059b4b92 | |
2017-05-31 10:49:04,986 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/9a00120e-91f4-4350-b59e-d953059b4b92/_tmp_space.db | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 ERROR Validator: Failed to perform validation | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:225) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:215) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:458) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at scala.collection.AbstractIterable.foreach(Iterable.scala:54) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.doValidate(Validator.java:166) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at com.thinkbiganalytics.spark.datavalidator.Validator.main(Validator.java:141) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.lang.reflect.Method.invoke(Method.java:606) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:515) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] Caused by: java.io.IOException: Permission denied | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.UnixFileSystem.createFileExclusively(Native Method) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at java.io.File.createTempFile(File.java:2001) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) | |
2017-05-31 10:49:04,990 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ... 25 more | |
2017-05-31 10:49:04,992 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:04 INFO SparkContext: Invoking stop() from shutdown hook | |
2017-05-31 10:49:05,016 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null} | |
2017-05-31 10:49:05,016 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null} | |
2017-05-31 10:49:05,016 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null} | |
2017-05-31 10:49:05,016 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null} | |
2017-05-31 10:49:05,016 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null} | |
2017-05-31 10:49:05,017 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null} | |
2017-05-31 10:49:05,018 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null} | |
2017-05-31 10:49:05,018 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null} | |
2017-05-31 10:49:05,018 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null} | |
2017-05-31 10:49:05,018 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null} | |
2017-05-31 10:49:05,018 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null} | |
2017-05-31 10:49:05,019 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null} | |
2017-05-31 10:49:05,020 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null} | |
2017-05-31 10:49:05,020 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null} | |
2017-05-31 10:49:05,020 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null} | |
2017-05-31 10:49:05,020 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
2017-05-31 10:49:05,020 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
2017-05-31 10:49:05,021 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
2017-05-31 10:49:05,021 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
2017-05-31 10:49:05,021 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
2017-05-31 10:49:05,074 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO SparkUI: Stopped Spark web UI at http://172.31.47.159:4040
2017-05-31 10:49:05,108 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2017-05-31 10:49:05,111 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO MemoryStore: MemoryStore cleared
2017-05-31 10:49:05,111 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO BlockManager: BlockManager stopped
2017-05-31 10:49:05,116 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO BlockManagerMaster: BlockManagerMaster stopped
2017-05-31 10:49:05,118 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
2017-05-31 10:49:05,120 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO SparkContext: Successfully stopped SparkContext
2017-05-31 10:49:05,121 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ShutdownHookManager: Shutdown hook called
2017-05-31 10:49:05,121 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-0c6fddb9-d414-4828-a6f9-84a6e88165ee
2017-05-31 10:49:05,123 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
2017-05-31 10:49:05,125 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
2017-05-31 10:49:05,125 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-0c6fddb9-d414-4828-a6f9-84a6e88165ee/httpd-1e68e202-fa1d-4e47-a44c-d0a4942d0155
2017-05-31 10:49:05,125 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-94ae9632-5e47-4fce-b062-ba92e58871e4
2017-05-31 10:49:05,150 INFO [stream error] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] 17/05/31 10:49:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
2017-05-31 10:49:05,567 ERROR [Timer-Driven Process Thread-10] c.t.nifi.v2.spark.ExecuteSparkJob ExecuteSparkJob[id=2b1f1ef0-4e06-1bd6-4157-92819b4c5ca9] ExecuteSparkJob for Validate And Split Records and flowfile: StandardFlowFileRecord[uuid=fc7f8e3e-63f6-449a-999c-94d2d1623150,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1496227734143-4, container=default, section=4], offset=143480, length=143371],offset=0,name=userdata1.csv,size=143371] completed with failed status 1
2017-05-31 10:49:39,156 INFO [Provenance Maintenance Thread-1] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 422
2017-05-31 10:49:39,169 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (3 records) into single Provenance Log File /opt/nifi/data/provenance_repository/419.prov in 13 milliseconds
2017-05-31 10:49:39,169 INFO [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 214 records
2017-05-31 10:49:41,308 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: KyloFlowCache Sync Id: 32113153-68e9-47fa-aa30-da5dcd86d02a . Attempting to process 3 events starting with event id: 419. Splitting into 1 batches of 500 each
2017-05-31 10:49:41,328 INFO [Timer-Driven Process Thread-7] c.t.n.p.cache.FeedFlowFileCacheUtil Ending the Job for Feed test.users and flowfile: fc7f8e3e-63f6-449a-999c-94d2d1623150. Event: ProvenanceEventRecordDTO{eventId=421, processorName=Failed Flow, componentId=2b1f1ef0-4e08-1bd6-bef7-407e37852fde, flowFile=fc7f8e3e-63f6-449a-999c-94d2d1623150, previous=null, eventType=DROP, eventDetails=Auto-Terminated by success Relationship, isEndOfJob=true, isBatch=true, isStream=false, feed=test.users}
2017-05-31 10:49:41,328 INFO [Timer-Driven Process Thread-7] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING Events to JMS ProvenanceEventRecordDTOHolder{events=3}
2017-05-31 10:49:41,328 INFO [Timer-Driven Process Thread-7] c.t.activemq.SendJmsMessage Sending ActiveMQ message [ProvenanceEventRecordDTOHolder{events=3}] to queue [thinkbig.feed-manager]
2017-05-31 10:49:41,333 INFO [Timer-Driven Process Thread-7] c.t.n.p.j.ProvenanceEventActiveMqWriter SENDING AGGREGATED STAT to JMS AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:49:05.567Z, maxTime=2017-05-31T10:49:05.572Z, collectionId='1fee3526-a088-4c4f-be18-bdd255dee4b2', eventCount=3}
2017-05-31 10:49:41,333 INFO [Timer-Driven Process Thread-7] c.t.activemq.SendJmsMessage Sending ActiveMQ message [AggregatedFeedProcessorStatisticsHolder{minTime=2017-05-31T10:49:05.567Z, maxTime=2017-05-31T10:49:05.572Z, collectionId='1fee3526-a088-4c4f-be18-bdd255dee4b2', eventCount=3}] to queue [thinkbig.provenance-event-stats]
2017-05-31 10:49:41,336 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] ProvenanceEventPool: Pool Stats: Created:[22], Borrowed:[125]
2017-05-31 10:49:41,336 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] KyloProvenanceEventReportingTask onTrigger Info: ReportingTask finished. Last Event id: 421. Total time to process 3 events was 28 ms. Total time spent querying for events in Nifi was 18 ms. Kylo ProcessingTime: 10 ms
2017-05-31 10:49:41,336 INFO [Timer-Driven Process Thread-7] c.t.n.p.r.KyloProvenanceEventReportingTask KyloProvenanceEventReportingTask[id=ca61b139-015a-1000-139d-b758468c5b40] Reporting Task Finished. Last Event Recorded was: 421
2017-05-31 10:49:44,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache Time to expire 1 flowfile and all references 0 ms. FeedFlowFile and references left in cache: 0
2017-05-31 10:49:56,682 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:49:56,720 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 37 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 377
2017-05-31 10:49:56,720 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 37 milliseconds
2017-05-31 10:50:48,096 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 655
2017-05-31 10:51:56,720 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:51:56,763 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 43 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 23 millis), max Transaction ID 377
2017-05-31 10:51:56,763 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 43 milliseconds
2017-05-31 10:52:48,101 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 5 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 655
2017-05-31 10:53:56,763 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:53:56,800 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 377
2017-05-31 10:53:56,800 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:54:04,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:54:04,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 125 statistics events to JMS. Sent 125 batch events to JMS
2017-05-31 10:54:48,117 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 15 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 655
2017-05-31 10:55:56,801 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:55:56,837 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 377
2017-05-31 10:55:56,837 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds
2017-05-31 10:56:48,128 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 10 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 655
2017-05-31 10:57:56,838 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:57:56,884 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 46 milliseconds (Stop-the-world time = 12 milliseconds, Clear Edit Logs time = 17 millis), max Transaction ID 377
2017-05-31 10:57:56,884 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 46 milliseconds
2017-05-31 10:58:48,133 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@76f518a9 checkpointed with 185 Records and 0 Swap Files in 4 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 1 millis), max Transaction ID 655
2017-05-31 10:59:14,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache FeedFlowFile Cache Size: 0
2017-05-31 10:59:14,190 INFO [pool-22-thread-1] c.t.n.p.cache.FeedFlowFileGuavaCache ProvenanceEvent JMS Stats: Sent 125 statistics events to JMS. Sent 125 batch events to JMS
2017-05-31 10:59:56,884 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-05-31 10:59:56,921 INFO [pool-8-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@607b1e81 checkpointed with 28 Records and 0 Swap Files in 36 milliseconds (Stop-the-world time = 13 milliseconds, Clear Edit Logs time = 16 millis), max Transaction ID 377
2017-05-31 10:59:56,921 INFO [pool-8-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 36 milliseconds