Class org.apache.predictionio.controller.P2LAlgorithm


abstract class P2LAlgorithm[PD, M, Q, P] extends BaseAlgorithm[PD, M, Q, P]

Base class of a parallel-to-local algorithm.

A parallel-to-local algorithm can be run in parallel on a cluster and produces a model that can fit within a single machine.

If your input query class requires custom JSON4S serialization, the most idiomatic way is to implement a trait that extends CustomQuerySerializer and mix it into your algorithm class, instead of overriding querySerializer directly.
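For example, a sketch of such a mixin (the Query class, its DateTime field, and the serializer below are hypothetical, not part of this API):

```scala
import org.apache.predictionio.controller.{CustomQuerySerializer, Utils}
import org.json4s.CustomSerializer
import org.json4s.JsonAST.JString

// Hypothetical query class whose `when` field needs custom JSON handling.
case class Query(user: String, when: org.joda.time.DateTime)

// Illustrative JSON4S serializer for the DateTime field.
class DateTimeSerializer extends CustomSerializer[org.joda.time.DateTime](_ => (
  { case JString(s) => org.joda.time.DateTime.parse(s) },
  { case dt: org.joda.time.DateTime => JString(dt.toString) }
))

// Mix this trait into the algorithm class instead of overriding querySerializer.
trait MyQuerySerializer extends CustomQuerySerializer {
  @transient override lazy val querySerializer =
    Utils.json4sDefaultFormats + new DateTimeSerializer
}
```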

PD
  Prepared data class.
M
  Trained model class.
Q
  Input query class.
P
  Output prediction class.

Linear Supertypes

BaseAlgorithm[PD, M, Q, P], BaseQuerySerializer, AbstractDoer, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new P2LAlgorithm()(implicit arg0: ClassTag[M], arg1: ClassTag[Q])


Abstract Value Members

  1. abstract def predict(model: M, query: Q): P

    Implement this method to produce a prediction from a query and trained model.

    model
      Trained model produced by train.
    query
      An input query.
    returns
      A prediction.

  2. abstract def train(sc: SparkContext, pd: PD): M

    Implement this method to produce a model from prepared data.

    pd
      Prepared data for model training.
    returns
      Trained model.
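Taken together, a minimal subclass implements train (distributed, on the cluster) and predict (local, against the in-memory model). The sketch below uses hypothetical PreparedData, Model, Query, and Prediction classes that are not part of this API:

```scala
import org.apache.spark.SparkContext
import org.apache.predictionio.controller.P2LAlgorithm

// Hypothetical engine classes for illustration only.
case class PreparedData(ratings: Seq[(String, String, Double)])
case class Model(itemScores: Map[String, Double])
case class Query(item: String)
case class Prediction(score: Double)

class MyAlgorithm extends P2LAlgorithm[PreparedData, Model, Query, Prediction] {

  // Runs on the cluster; the resulting Model must fit on a single machine.
  def train(sc: SparkContext, pd: PreparedData): Model = {
    val scores = sc.parallelize(pd.ratings)
      .map { case (_, item, rating) => (item, rating) }
      .reduceByKey(_ + _)
      .collectAsMap()   // bring the (small) aggregated model to the driver
    Model(scores.toMap)
  }

  // Runs locally against the in-memory model.
  def predict(model: Model, query: Query): Prediction =
    Prediction(model.itemScores.getOrElse(query.item, 0.0))
}
```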

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  2. final def ##(): Int
     Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  4. final def asInstanceOf[T0]: T0
     Definition Classes: Any
  5. def batchPredict(m: M, qs: RDD[(Long, Q)]): RDD[(Long, P)]

    This is a default implementation to perform batch prediction. Override this method for a custom implementation.

    m
      A model.
    qs
      An RDD of index-query tuples. The index is used to keep track of predicted results with corresponding queries.
    returns
      Batch of predicted results.
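Overriding batchPredict can help when the default per-query path repeats work. A sketch of one such override, reusing the hypothetical Model, Query, and Prediction classes (not part of this API), broadcasts the local model once instead of shipping it with every task closure:

```scala
import org.apache.spark.rdd.RDD

// Inside a P2LAlgorithm[PreparedData, Model, Query, Prediction] subclass:
override def batchPredict(m: Model, qs: RDD[(Long, Query)]): RDD[(Long, Prediction)] = {
  // Broadcast the model to executors once, then reuse it for every query.
  val bcModel = qs.sparkContext.broadcast(m)
  qs.mapValues(q => predict(bcModel.value, q))
}
```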

  6. def batchPredictBase(sc: SparkContext, bm: Any, qs: RDD[(Long, Q)]): RDD[(Long, P)]

    :: DeveloperApi :: Engine developers should not use this directly. This is called by the evaluation workflow to perform batch prediction.

    sc
      Spark context.
    bm
      Model.
    qs
      Batch of queries.
    returns
      Batch of predicted results.

    Definition Classes: P2LAlgorithm → BaseAlgorithm
  7. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes: AnyRef
  9. def equals(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  10. def finalize(): Unit
      Attributes: protected[java.lang]
      Definition Classes: AnyRef
      Annotations: @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]
      Definition Classes: AnyRef → Any
  12. lazy val gsonTypeAdapterFactories: Seq[TypeAdapterFactory]

    Permalink

    :: DeveloperApi :: Serializer for Java query classes using Gson

    :: DeveloperApi :: Serializer for Java query classes using Gson

    Definition Classes
    BaseQuerySerializer
  13. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  15. def makePersistentModel(sc: SparkContext, modelId: String, algoParams: Params, bm: Any): Any

    Permalink

    :: DeveloperApi :: Engine developers should not use this directly (read on to see how parallel-to-local algorithm models are persisted).

    :: DeveloperApi :: Engine developers should not use this directly (read on to see how parallel-to-local algorithm models are persisted).

    Parallel-to-local algorithms produce local models. By default, models will be serialized and stored automatically. Engine developers can override this behavior by mixing the PersistentModel trait into the model class, and PredictionIO will call PersistentModel.save instead. If it returns true, a org.apache.predictionio.workflow.PersistentModelManifest will be returned so that during deployment, PredictionIO will use PersistentModelLoader to retrieve the model. Otherwise, Unit will be returned and the model will be re-trained on-the-fly.

    sc

    Spark context

    modelId

    Model ID

    algoParams

    Algorithm parameters that trained this model

    bm

    Model

    returns

    The model itself for automatic persistence, an instance of org.apache.predictionio.workflow.PersistentModelManifest for manual persistence, or Unit for re-training on deployment

    Definition Classes
    P2LAlgorithmBaseAlgorithm
    Annotations
    @DeveloperApi()
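A sketch of the manual-persistence path described above, assuming the PersistentModel and PersistentModelLoader signatures from this package (the MyParams and MyModel classes, the path parameter, and the storage scheme are all hypothetical):

```scala
import org.apache.spark.SparkContext
import org.apache.predictionio.controller.{Params, PersistentModel, PersistentModelLoader}

// Hypothetical algorithm parameters carrying a storage location.
case class MyParams(path: String) extends Params

// Model that manages its own persistence instead of default serialization.
case class MyModel(itemScores: Map[String, Double])
    extends PersistentModel[MyParams] {
  // Returning true tells PredictionIO to use the loader below at deployment.
  def save(id: String, params: MyParams, sc: SparkContext): Boolean = {
    sc.parallelize(itemScores.toSeq).saveAsObjectFile(s"${params.path}/$id")
    true
  }
}

// Companion loader used by PredictionIO to retrieve the model at deploy time.
object MyModel extends PersistentModelLoader[MyParams, MyModel] {
  def apply(id: String, params: MyParams, sc: Option[SparkContext]): MyModel =
    MyModel(
      sc.get.objectFile[(String, Double)](s"${params.path}/$id").collect().toMap)
}
```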
  16. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  17. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  18. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  19. def predictBase(bm: Any, q: Q): P

    :: DeveloperApi :: Engine developers should not use this directly. Called by serving to perform a single prediction.

    bm
      Model.
    q
      Query.
    returns
      Predicted result.

    Definition Classes: P2LAlgorithm → BaseAlgorithm
  20. def queryClass: Class[Q]

    :: DeveloperApi :: Obtains the type signature of query for this algorithm.

    returns
      Type signature of query.

    Definition Classes: BaseAlgorithm
  21. lazy val querySerializer: Formats

    :: DeveloperApi :: Serializer for Scala query classes using org.apache.predictionio.controller.Utils.json4sDefaultFormats.

    Definition Classes: BaseQuerySerializer
  22. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  23. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  24. def trainBase(sc: SparkContext, pd: PD): M

    :: DeveloperApi :: Engine developers should not use this directly. This is called by the workflow to train a model.

    sc
      Spark context.
    pd
      Prepared data.
    returns
      Trained model.

    Definition Classes: P2LAlgorithm → BaseAlgorithm
  25. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from BaseAlgorithm[PD, M, Q, P]

Inherited from BaseQuerySerializer

Inherited from AbstractDoer

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
