playframework – Why does my Docker image of my Play Framework for Scala app fail to start with an AccessDeniedException?

Problem

My project builds great with either sbt docker:publish or sbt docker:publishLocal, but when I go to run the image, it fails with the following stack trace:

eleanor@demo-machine:~/workbench/opendar/opendar$ docker run eholley/opendar:1.0-SNAPSHOT
Oops, cannot start the server.
java.nio.file.AccessDeniedException: /opt/docker/RUNNING_PID
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
        at java.nio.file.Files.newOutputStream(Files.java:216)
        at play.core.server.ProdServerStart$.createPidFile(ProdServerStart.scala:136)
        at play.core.server.ProdServerStart$.start(ProdServerStart.scala:43)
        at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
        at play.core.server.ProdServerStart.main(ProdServerStart.scala)
eleanor@demo-machine:~/workbench/opendar/opendar$

Reproduce

The image is public on Docker Hub as eholley/opendar:1.0-SNAPSHOT. (I have omitted some environment variables from the run command, so the expected output is a configuration error based on application.conf rather than the error above.)

If you want to try building and packaging it yourself, you can clone https://0x00F3@bitbucket.org/0x00F3/opendar.git.

What I tried

The problem is not dissimilar to this one, so as a shot in the dark I tried adding

import com.typesafe.sbt.packager.docker.DockerChmodType
dockerChmodType := DockerChmodType.UserGroupWriteExecute

following the advice in that thread. It did not seem to change anything.
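Another thing I am considering as a workaround (just a sketch, assuming the only blocker is Play trying to write its RUNNING_PID file) is to tell Play not to write a PID file at all via the packaged start script:

// build.sbt (sketch): skip writing RUNNING_PID inside the container
javaOptions in Universal += "-Dplay.server.pidfile.path=/dev/null"

I have not verified this in my build yet, and the underlying permission question still stands.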

Background

  • Ubuntu Version 18.04.2 LTS
  • Java version openjdk 1.8.0_191
  • sbt version 1.2.1
  • Scala Version 2.12.6
  • sbt-native-packager version 1.3.21
  • Play Framework Version 2.6.20

The IntelliJ remote debugger makes a connection, but breakpoints do not work (Scalatra)

I have had no luck with breakpoints after trying for more than a few hours, including installing xsbt-web-plugin as per https://stackoverflow.com/questions/44232723/intellij-remote-debugger-connects-but-breakpoints-dont-work.

addSbtPlugin("com.earldouglas" % "xsbt-web-plugin" % "4.0.2")

I did set debugPort in Jetty := 5005 and then ran jetty:debug.

Screenshot of the remote debug configuration

localhost:5005 -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
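For completeness, this is roughly how the build is wired up (a sketch based on the plugin's documented keys; the port matches the run configuration above):

// project/plugins.sbt
addSbtPlugin("com.earldouglas" % "xsbt-web-plugin" % "4.0.2")

// build.sbt (sketch)
enablePlugins(JettyPlugin)
debugPort in Jetty := 5005   // then start the app with: sbt jetty:debug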

I'm on Java 1.8, Scala 2.12.6, Scalatra 2.6.5, sbt 1.2.8, OS X 10.13.6, Intellij 2018.3 Community Edition

I would be very grateful if someone could share a reasonably exact configuration that has actually been tested.

If it takes much longer to get this working, I will probably do without a debugger, which of course I would rather not.

Many thanks!

Streaming – Problem configuring Spark as a Kafka consumer (Scala)

Hello all, here is my code. I am trying to configure Spark as a Kafka consumer, but I get an exception. A first problem is that the web UI binds to 0.0.0.0 or a local IP on port 4040, which I cannot reach in the browser. The exception itself is in the section below. Thanks for your help:

################################################################

import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

object Teal extends App {
  val spark = SparkSession
    .builder()
    .master("local[*]")
    .appName("teal")
    .getOrCreate()

  import spark.implicits._

  val df = spark.readStream
    .format("kafka")
    //.setMaster("local[*]")
    .option("kafka.bootstrap.servers", "127.0.0.1:9090")
    .option("subscribe", "test")
    .load()

  df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .as[(String, String)]

  val query = df.writeStream
    .outputMode("complete")
    .format("console")
    .start()
}

################################################################

The problem is: Exception in thread "main" org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:652)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:161)
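For what it's worth, the error message suggests the Kafka source is simply not on the classpath. A minimal sketch of the dependency I believe is missing (the sparkVersion value here is an assumption; it should match the Spark version actually in use):

// build.sbt (sketch): the Structured Streaming Kafka source is a separate artifact
val sparkVersion = "2.4.0" // assumption: replace with the Spark version in use

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % sparkVersion,
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)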

File system – path normalization in Scala

I am writing a function to normalize a path "manually", i.e. without using the java.io or java.nio APIs.

def normalize(path: String): String = {
  val arr = path.split('/')
  val list = arr.foldLeft(List.empty[String]) { case (res, s) =>
    s match {
      case "." => res
      case ".." => res.drop(1)
      case _ => s :: res
    }
  }
  list.reverse.mkString("/")
}

How would you improve the function? Comments on style and best practices are also appreciated.
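For reference, here are a couple of calls showing the behaviour I expect from the version above (assuming Unix-style separators; these are just sanity checks, not part of the function):

// assumes the normalize definition above is in scope
assert(normalize("/a/b/../c/./d") == "/a/c/d")
assert(normalize("a/./b/../../c") == "c")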

scala – Apache Spark, Range Joins, Data Skew and Performance

I have the following Apache Spark SQL join predicate:

t1.field1 = t2.field1 and t2.start_date <= t1.event_date and t1.event_date < t2.end_date

Data:

The t1 DataFrame has over 50 million rows
The t2 DataFrame has over 2 million rows

Almost all t1.field1 values in the t1 DataFrame are the same value (null).

At the moment, because of the data skew, the Spark cluster hangs on a single task for more than 10 minutes to complete this join. During that time only one worker, running one task, is busy; all other 9 workers are idle. How can this join be improved so that the load spreads from this one task across the entire Spark cluster?
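For reference, this is roughly how the predicate above is expressed in the DataFrame API (a sketch; t1 and t2 stand for the two DataFrames already described, with the column names taken from the predicate):

import org.apache.spark.sql.functions.col

// sketch: t1 and t2 are the DataFrames described above, aliased so that
// the columns in the range predicate can be disambiguated
val joined = t1.as("a").join(
  t2.as("b"),
  col("a.field1") === col("b.field1") &&
    col("b.start_date") <= col("a.event_date") &&
    col("a.event_date") < col("b.end_date")
)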

apache spark – How to concatenate strings in Scala, only if the input string is not already in the field?

The following Scala code works, but I just need to update this line:

"case StringType => concat_ws (", ", collect_list (col (c)))"

so that it appends only strings that are not already in the existing field. In this example, the letter "b" would not appear twice.

val df = Seq(
  (1, 1.0, true, "a"),
  (2, 2.0, false, "b"),
  (3, 2.0, false, "b"),
  (3, 2.0, false, "c")
).toDF("id", "d", "b", "s")

val dataTypes: Map[String, DataType] = df.schema.map(sf =>
  (sf.name, sf.dataType)).toMap

def genericAgg(c: String) = {
  dataTypes(c) match {
    case DoubleType => sum(col(c))
    case StringType => concat_ws(",", collect_list(col(c)))
    case BooleanType => max(col(c))
  }
}

val aggExprs: Seq[Column] = df.columns.filterNot(_ == "id")
  .map(c => genericAgg(c))

df
  .groupBy("id")
  .agg(
    aggExprs.head, aggExprs.tail: _*
  )
  .show()
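One direction I am considering (a sketch only, not tested against the full pipeline) is to collect the strings into a set instead of a list, so duplicates within each group are dropped before concatenation:

import org.apache.spark.sql.functions.{col, collect_set, concat_ws, max, sum}
import org.apache.spark.sql.types.{BooleanType, DoubleType, StringType}

// sketch: same shape as genericAgg above (and assumes the same dataTypes map),
// but collect_set de-duplicates the grouped strings before concat_ws joins them
def genericAggDistinct(c: String) = {
  dataTypes(c) match {
    case DoubleType  => sum(col(c))
    case StringType  => concat_ws(",", collect_set(col(c)))
    case BooleanType => max(col(c))
  }
}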

scala – Is it possible to have an HMap from one type A to several other types?

Shapeless has HMaps to enforce type safety for heterogeneous maps, but it does not seem to allow mapping from one type to several types.

In other words, this works:

class BiMapIS[K, V]
implicit val stringToInt = new BiMapIS[String, Int]
implicit val intToString = new BiMapIS[Int, String]

val hm = HMap[BiMapIS](23 -> "foo", "bar" -> 13)
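For context, lookups against this first (working) map resolve the value type from the key type (assuming the definitions above and import shapeless._ are in scope):

// assumes the class/implicits above and import shapeless._ are in scope
val i: Option[Int]    = hm.get("bar") // Some(13): via BiMapIS[String, Int]
val s: Option[String] = hm.get(23)    // Some("foo"): via BiMapIS[Int, String]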

But this does not:

class BiMapIS[K, V]
implicit val stringToInt = new BiMapIS[String, Int]
implicit val stringToString = new BiMapIS[String, String]

val hm = HMap[BiMapIS]("val1" -> 1, "val2" -> "two")

My question is: is there a way to allow type-safe mappings from one type (e.g. String) to several types (e.g. both String and Int)?

Besides, I'm not married to Shapeless for this solution.

Scala – Is it a good idea to use lazy val for correctness?

In Scala, declaring a val as lazy means that the value is not evaluated until it is used for the first time. This is often explained/demonstrated as useful for optimization when a value is expensive to compute but may not be needed at all.

It is also possible to use lazy in a way that is needed for the code to work correctly, not just for efficiency. For example, consider a lazy val like this:

lazy val foo = someObject.getList().find(pred) // do not use this until someObject has filled its list!

If foo were not lazy, it would always contain None, because its value would be evaluated immediately, before the list contained anything. Since it is lazy, it contains the right thing, as long as it is not evaluated before the list is filled.
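Here is a minimal, self-contained sketch of the difference (Holder is a hypothetical stand-in for someObject, not taken from the real code):

// Holder is a hypothetical stand-in for "someObject" above
class Holder {
  private var items: List[Int] = Nil
  def fill(): Unit = { items = List(1, 2, 3) }
  def getList(): List[Int] = items
}

object LazyDemo extends App {
  val holder = new Holder

  val eager = holder.getList().find(_ > 1)         // evaluated right now, while the list is empty
  lazy val deferred = holder.getList().find(_ > 1) // not evaluated until first use

  holder.fill()

  println(eager)    // None: captured the empty list
  println(deferred) // Some(2): evaluated only after fill()
}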

My question: is it OK to use lazy in places like this, where the code would be wrong without it, or should it be reserved for optimization only?

(Here is the code snippet from real-world practice that led to this question.)

function – Problems converting Python code to Scala code

I have a function that takes a filename and two words as parameters. The file is a long pronunciation dictionary (a list of words and their phonemes) used to determine whether the two words rhyme. For the two words to rhyme, the last vowel and everything after it must be the same.

I'm trying to convert my Python code to Scala. My Python code runs properly, but my Scala code produces all kinds of errors, for example: values for table1 and table2 are not found, .reverse is invalid, and filename cannot be found.

Python code

def wordRhymeOrNot(filename, word0, word1):
    with open(filename, 'r') as f:
        table0 = {}
        table1 = {}
        table2 = {}
        a = ''
        q = ''
        for i in f:
            table0[i.split(' ', 1)[0]] = ''.join(i.split()[1:])
        for y, x in table0.items():
            for u in x:
                if not u.isdigit():
                    a += u
            table1[y] = a
            a = ''
        for b, c in table1.items():
            for j in c[::-1]:
                q += j
                if j in set('AEIOU'):
                    z = q[::-1]
                    table2[b] = z
                    q = ''
                    break
        if word0 not in table2 or word1 not in table2:
            return []
        elif table2[word0] == table2[word1]:
            return True
        else:
            return False

Scala code:

package rhyme

import scala.io.Source
import scala.util.control._

object rhyme {

    def wordRhymeOrNot(filename: String, word0: String, word1: String) {
        var a: String = ""
        var q: String = ""
        var z = List[String]()
        var loop = new Breaks
        var map0: Map[String, String] =
            io.Source.fromFile(filename)                  // open the file
              .getLines()                                 // read the file line by line
              .map(_.split("\\s+"))                       // split on spaces
              .map(a => (a.head, a.tail.reduce(_ + _)))   // create a tuple
              .toMap
        for ((y, x) <- map0) {
            for (u <- x) {
                b = u.toInt
                if (!(b)) {
                    a = a + u
                }
                else {
                    var table1 = Map(y -> a)
                }
            }
            a = ""
            table1
        }
        for ((b, c) <- table1) {
            loop.breakable {
                var reversedC = c.reverse
                for (j <- reversedC) {
                    q = q + j
                    var mainSet = Set("A", "E", "I", "O", "U")
                    z = q.reverse
                    if (j.exists(mainSet.contains(_))) {
                        var table2 = Map(b -> z)
                        q = ""
                        loop.break
                    }
                }
            }
            table2
        }
        for ((o, p) <- table2) {
            if (table2.getOrElse(word0, "No such value") == "No such value" && table2.getOrElse(word1, "No such value") == "No such value") {
                Array[String]()
            }
            else if (table2.getOrElse(word0, "no such value") == table2.getOrElse(word1, "no such value")) {
                true
            }
            else {
                false
            }
        }
    }
}
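For reference, this is the direction I think a more idiomatic version would take (a rough sketch only, not the code above; it assumes each line of the file is a word followed by space-separated phonemes, as in the Python version, and it returns Option[Boolean] instead of True/False/[]):

import scala.io.Source

object RhymeSketch {

  private val vowels = Set('A', 'E', 'I', 'O', 'U')

  // last vowel letter and everything after it, ignoring stress digits
  private def rhymeEnding(pron: String): Option[String] = {
    val letters = pron.filterNot(_.isDigit)
    val lastVowel = letters.lastIndexWhere(vowels.contains)
    if (lastVowel >= 0) Some(letters.substring(lastVowel)) else None
  }

  def wordRhymeOrNot(filename: String, word0: String, word1: String): Option[Boolean] = {
    val pronunciations: Map[String, String] =
      Source.fromFile(filename).getLines()
        .map(_.split("\\s+", 2))
        .collect { case Array(word, rest) => word -> rest.replaceAll("\\s", "") }
        .toMap

    for {
      e0 <- pronunciations.get(word0).flatMap(rhymeEnding)
      e1 <- pronunciations.get(word1).flatMap(rhymeEnding)
    } yield e0 == e1
  }
}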

Scala macros: How can I get a list of the objects that extend a specific trait?

I have a package foo.bar in which a trait Parent is defined, along with a set of objects Child1, Child2, Child3. I would like to get a List[Parent] containing all the child objects. How can I write such a macro?

At the moment I have the following:

def myMacro(c: blackbox.Context): c.Expr[Set[RuleGroup]] = {
  val parentSymbol = c.mirror.staticClass("foo.bar.Parent")
  c.mirror.staticPackage("foo.bar").info.members
    // get all the objects
    .filter { sym =>
      // remove $ objects
      sym.isModule && sym.asModule.moduleClass.asClass.baseClasses.contains(parentSymbol)
    }.map { ??? /* ??? */ }
  ???
}