Posts by Joe*_*Urc

Uploading a PDF from jsPDF via AJAX as binary data

I am trying to pass a PDF generated in front-end JavaScript with jsPDF to a Spring Framework MVC back end. Here is the front-end code I wrote:

var filename = "thefile";
var constructURL = '/daas-rest-services/dashboard/pdfPrintUpload/' + filename;
var url = restService.getUrl(constructURL);
var fileBytes = btoa(pdf.output());
$http.post(url, fileBytes)
  .success(function(data) {
    console.log(data);
  })
  .error(function(e, a) {
    console.log(e);
    console.log(a);
  });

The pdf variable is generated correctly, and I can confirm it opens when calling pdf.save("filename"). Below is the Java code for the Spring MVC back end written to handle this call:

@RequestMapping(method = RequestMethod.POST, value = "/pdfPrintUpload/{documentName}")
public @ResponseBody String postPrintDocument(@PathVariable String documentName, @RequestParam byte[] fileBytes) {
    String methodName = "postPrintDocument";
    if(logger.isLoggable(Level.FINER)){
        logger.entering(CLASS_NAME, methodName);               
    }
    String check;
    if(fileBytes != null){
        check = "not null";
    } else {
        check = "null ";
    }
    //Decoding the bytestream
    //Save to …
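For reference, one way the base64 text produced by btoa(pdf.output()) could be received on the Spring side is to bind the raw request body rather than a request parameter. The sketch below is an assumption, not the poster's actual code: the class name is made up, it assumes Java 8 for java.util.Base64, and it assumes the body arrives as plain base64 text (with $http this may require posting it with a text/plain content type).

import java.util.Base64;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class PdfUploadController {   // hypothetical class name

    @RequestMapping(method = RequestMethod.POST, value = "/pdfPrintUpload/{documentName}")
    public @ResponseBody String postPrintDocument(@PathVariable String documentName,
                                                  @RequestBody String base64Body) {
        // btoa() on the front end produces standard base64, so decode it back into the PDF bytes
        byte[] pdfBytes = Base64.getDecoder().decode(base64Body);
        // ... write pdfBytes to storage under documentName
        return "received " + pdfBytes.length + " bytes for " + documentName;
    }
}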

javascript java ajax jquery spring

7 votes · 1 answer · 1998 views

Spark/Scala - project runs fine from IntelliJ but throws an error with SBT

I have a Spark project that I run locally from IntelliJ, and it works fine when run from there. The project is very simple and, for now, is just a toy example. Here is the code:

package mls.main


import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}
import java.nio.file.{Paths, Files}
import scala.io.Source


object Main {

  def main(args: Array[String]) {
    import org.apache.log4j.Logger
    import org.apache.log4j.Level
    print("HELLO WORLD!")
    Logger.getLogger("org").setLevel(Level.WARN)
    Logger.getLogger("akka").setLevel(Level.WARN)

    // fire up spark
    val sc = createContext
    val sqlContext = new SQLContext(sc)
    loadAHSData(List("x"),sqlContext)

  }

  def loadAHSData(years: List[String], sqlContext : SQLContext) : Unit = {
    // load the column names that exists in all 3 datasets
    val columns = sqlContext.sparkContext
      .textFile("data/common_columns.txt")
      .collect()
      .toSeq

    columns.foreach(println)
  }


  def createContext(appName: …

scala intellij-idea sbt apache-spark

4 votes · 1 answer · 709 views

Looping through every possible combination of an array

I am trying to loop through every combination of an array in C#, where a combination depends on size but not on order. For example: var states = ["NJ", "AK", "NY"];

Some possible combinations would be:

states = [];
states = ["NJ"];
states = ["NJ","NY"];
states = ["NY"];
states = ["NJ", "NY", "AK"];

...and so on. Also, in my case states = ["NJ","NY"] and states = ["NY","NJ"] are the same, because order does not matter.

Does anyone have any ideas on the most efficient way to do this?
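What is being described is the power set of the array: every subset, with order irrelevant. A minimal sketch of the usual bit-mask enumeration is shown below; it is written in Java for illustration (the class and method names are made up), but the same idea carries over to C# directly.

import java.util.ArrayList;
import java.util.List;

public class Subsets {

    // Enumerates every subset of the input (the power set) using a bit mask:
    // bit i of 'mask' decides whether items[i] is included, so order never matters.
    static List<List<String>> allCombinations(String[] items) {
        List<List<String>> result = new ArrayList<>();
        for (int mask = 0; mask < (1 << items.length); mask++) {
            List<String> combo = new ArrayList<>();
            for (int i = 0; i < items.length; i++) {
                if ((mask & (1 << i)) != 0) {
                    combo.add(items[i]);
                }
            }
            result.add(combo);
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 states -> 2^3 = 8 combinations, including the empty one.
        for (List<String> combo : allCombinations(new String[] {"NJ", "AK", "NY"})) {
            System.out.println(combo);
        }
    }
}

Since there are 2^n subsets for n elements, this is only practical for small arrays.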

c# arrays

1 vote · 1 answer · 1274 views

Tag statistics

ajax ×1

apache-spark ×1

arrays ×1

c# ×1

intellij-idea ×1

java ×1

javascript ×1

jquery ×1

sbt ×1

scala ×1

spring ×1