PDF text extraction transform failing

One of my ‘PDF text extraction’ transforms in Builder is failing with a 500 error. The preview for the transform immediately before it works fine. Screenshot and stack trace are below. How do I resolve this?

Query failed to complete successfully: {jobId=5ae00cde-90de-40ce-8ff6-e21a233c31c5, errorCodeName=INTERNAL, errorInstanceId=, errorCode=500, errorName=Default:Internal, causeMessage=org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
Stacktrace:
java.lang.RuntimeException: org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
	at com.palantir.sparkreporter.tagging.SparkReporterTaggingUtils.runWithSparkReporterProperties(SparkReporterTaggingUtils.java:52)
	at com.palantir.eddie.functions.compute.spark.module.preview.FoundryPreviewManager.lambda$wrapWithSparkReporterProperties$7(FoundryPreviewManager.java:246)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at com.palantir.eddie.functions.compute.spark.module.preview.FoundryPreviewManager.lambda$run$3(FoundryPreviewManager.java:214)
	at com.palantir.foundry.spark.api.SparkAuthorization.runAsUser(SparkAuthorization.java:70)
	at com.palantir.eddie.functions.compute.spark.module.preview.FoundryPreviewManager.run(FoundryPreviewManager.java:225)
	at com.palantir.eddie.functions.compute.spark.module.preview.PreviewQueryEvaluator.execute(PreviewQueryEvaluator.java:166)
	at com.palantir.eddie.functions.compute.spark.module.preview.PreviewQueryEvaluator.lambda$apply$0(PreviewQueryEvaluator.java:108)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at com.palantir.eddie.functions.compute.spark.module.checkpoint.DefaultFoundryCheckpointManagerWrapper.run(DefaultFoundryCheckpointManagerWrapper.java:44)
	at com.palantir.eddie.functions.compute.spark.module.preview.PreviewQueryEvaluator.apply(PreviewQueryEvaluator.java:102)
	at com.palantir.eddie.functions.compute.spark.module.preview.PreviewQueryEvaluator.apply(PreviewQueryEvaluator.java:78)
	at com.palantir.eddie.functions.compute.spark.module.preview.PreviewQueryEvaluator.apply(PreviewQueryEvaluator.java:51)
	at com.palantir.interactive.module.tasks.queries.QueryRunner.runBlockingUnmapped(QueryRunner.java:102)
	at com.palantir.interactive.module.tasks.queries.QueryRunner.runBlocking(QueryRunner.java:98)
	at com.palantir.interactive.module.tasks.InteractiveModuleResource.lambda$submitInternal$11(InteractiveModuleResource.java:337)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:76)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
	at com.palantir.tracing.Tracers$TracingAwareRunnable.run(Tracers.java:584)
	at com.palantir.tritium.metrics.TaggedMetricsExecutorService$TaggedMetricsRunnable.run(TaggedMetricsExecutorService.java:142)
	at org.jboss.threads.ContextHandler$1.runWith(ContextHandler.java:18)
	at org.jboss.threads.EnhancedQueueExecutor$Task.doRunWith(EnhancedQueueExecutor.java:2516)
	at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2495)
	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1521)
	at com.palantir.tritium.metrics.TaggedMetricsThreadFactory$InstrumentedTask.run(TaggedMetricsThreadFactory.java:94)
	at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
	at org.apache.spark.SparkException$.internalError(SparkException.scala:107)
	at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:536)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:548)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:148)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:144)
	at org.apache.spark.sql.execution.QueryExecution.assertOptimized(QueryExecution.scala:162)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:182)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:179)
	at org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:238)
	at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:284)
	at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:252)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:117)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4320)
	at org.apache.spark.sql.Dataset.collectAsList(Dataset.scala:3584)
	at com.palantir.eddie.functions.compute.spark.module.serialization.ResultSerializerV1.collectAndSerialize(ResultSerializerV1.java:107)
	at com.palantir.eddie.functions.compute.spark.module.serialization.PreviewTableCollector.collect(PreviewTableCollector.java:42)
	at com.palantir.eddie.functions.compute.spark.module.ResultHelperV2$1.visitTable(ResultHelperV2.java:50)
	at com.palantir.eddie.functions.compute.spark.module.ResultHelperV2$1.visitTable(ResultHelperV2.java:46)
	at com.palantir.eddie.functions.implementations.spark.SparkData$SparkTable.accept(SparkData.java:89)
	at com.palantir.eddie.functions.compute.spark.module.ResultHelperV2.collect(ResultHelperV2.java:46)
	at com.palantir.eddie.functions.compute.spark.module.common.ModuleComputeUtils.computeResult(ModuleComputeUtils.java:145)
	at com.palantir.eddie.functions.compute.spark.module.preview.DefaultPreviewComputer.lambda$run$13(DefaultPreviewComputer.java:290)
	at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
	at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
	at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1939)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
	at com.palantir.eddie.functions.compute.spark.module.preview.DefaultPreviewComputer.run(DefaultPreviewComputer.java:288)
	at com.palantir.eddie.functions.compute.spark.module.preview.FoundryPreviewManager.lambda$run$2(FoundryPreviewManager.java:203)
	at com.palantir.sparkreporter.tagging.SparkReporterTaggingUtils.runWithSparkReporterProperties(SparkReporterTaggingUtils.java:49)
	... 26 more
Caused by: java.lang.NullPointerException: Cannot invoke "Object.getClass()" because "obj" is null
	at java.base/java.lang.reflect.Method.invoke(Method.java:572)
	at com.palantir.eddie.serialization.DefaultClientFactory$SerializableInvocationHandler.handleInvocation(DefaultClientFactory.java:166)
	at com.google.common.reflect.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:87)
	at jdk.proxy2/jdk.proxy2.$Proxy132.getMediaItemMetadata(Unknown Source)
	at com.palantir.eddie.services.FoundryMediaSetServiceProxy.getMediaItemMetadata(FoundryMediaSetServiceProxy.java:66)
	at com.palantir.eddie.media.PageOperatingPdfExtractor.confirmPdfAndResolvePageRange(PageOperatingPdfExtractor.java:143)
	at com.palantir.eddie.media.PageOperatingPdfExtractor.extract(PageOperatingPdfExtractor.java:107)
	at org.apache.spark.sql.catalyst.expressions.EddieNativeRawPdfTextExtract.nullSafeEval(EddieNativeRawPdfTextExtract.scala:45)
	at org.apache.spark.sql.catalyst.expressions.TernaryExpression.eval(Expression.scala:821)
	at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:158)
	at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(InterpretedMutableProjection.scala:89)
	at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$47.$anonfun$applyOrElse$82(Optimizer.scala:2161)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
	at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$47.applyOrElse(Optimizer.scala:2161)
	at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$47.applyOrElse(Optimizer.scala:2156)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:466)
	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1215)
	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1214)
	at org.apache.spark.sql.catalyst.plans.logical.LocalLimit.mapChildren(basicLogicalOperators.scala:1608)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:466)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:466)
	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1215)
	at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1214)
	at org.apache.spark.sql.catalyst.plans.logical.GlobalLimit.mapChildren(basicLogicalOperators.scala:1587)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:466)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformWithPruning(TreeNode.scala:427)
	at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$.apply(Optimizer.scala:2156)
	at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$.apply(Optimizer.scala:2154)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
	at scala.collection.IndexedSeqOptimized.foldLeft(IndexedSeqOptimized.scala:60)
	at scala.collection.IndexedSeqOptimized.foldLeft$(IndexedSeqOptimized.scala:68)
	at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:38)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
	at org.apache.spark.sql.FoundrySessionStateBuilder$$anon$2.super$execute(FoundrySessionStateBuilder.scala:91)
	at org.apache.spark.sql.FoundrySessionStateBuilder$$anon$2.$anonfun$execute$2(FoundrySessionStateBuilder.scala:91)
	at com.palantir.foundry.spark.Tracing$.trace(Tracing.scala:13)
	at org.apache.spark.sql.FoundrySessionStateBuilder$$anon$2.execute(FoundrySessionStateBuilder.scala:91)
	at org.apache.spark.sql.FoundrySessionStateBuilder$$anon$2.execute(FoundrySessionStateBuilder.scala:88)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:152)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:219)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
	... 64 more
, jobId=5ae00cde-90de-40ce-8ff6-e21a233c31c5}

Hey! Just to double-check: are the PDFs in a media set? (You can only run text extraction on media set media references.)

@helenq Yes - the PDFs are in a media set.

Is there a chance of a null value in the mediaReference column? Do you get the same error if you first filter to rows where mediaReference is not null?
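If it's easier to test outside Builder, the equivalent null filter in a PySpark code transform would look roughly like this (just a sketch; the helper name is made up and the default column name should be adapted to your schema):

```python
from pyspark.sql import DataFrame, functions as F


def drop_null_media_references(df: DataFrame, column: str = "mediaReference") -> DataFrame:
    """Drop rows whose media reference is null before running PDF text extraction."""
    return df.filter(F.col(column).isNotNull())
```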

I see the same error even after filtering out any potential null mediaReferences.

This is very interesting. One issue I ran into with media sets is that a 0-byte PDF can sit in a dataset without issue, but it then caused my media set build to fail. I don't know how to check for this without an upstream dataset of all the PDFs.
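If you do end up with an upstream dataset (or any readable location) containing the raw PDF files, one way to flag 0-byte ones is Spark's built-in binaryFile source. This is only a sketch; the path is a placeholder for wherever your raw PDFs actually live:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# binaryFile (Spark 3.x) exposes path, modificationTime, length, and content columns.
pdf_files = (
    spark.read.format("binaryFile")
    .option("pathGlobFilter", "*.pdf")
    .load("/path/to/raw/pdfs")  # placeholder path
)

# 0-byte files are candidates for breaking the media set build or the extraction step.
zero_byte_pdfs = pdf_files.filter(F.col("length") == 0).select("path")
zero_byte_pdfs.show(truncate=False)
```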