scala - How can a parallel array be reused?
I am trying to use Scala's parallel collections to dispatch some computations in parallel. Because there is a large amount of input data, I am using mutable arrays to store the data in order to avoid GC issues. This was my initial approach:
// reusable input data structure
val inputData = new Array[Array[Int]](Runtime.getRuntime.availableProcessors * ChunkSize)
for (i <- 0 until inputData.length) {
  inputData(i) = new Array[Int](arraySize)
}

while (haveMoreInput()) {
  // read the input - must be sequential!
  for (array <- 0 until inputData.length) {
    for (index <- 0 until arraySize) {
      inputData(array)(index) = deserializeFromExternalSource()
    }
  }

  // map over the data in parallel
  // note that longRunningProcess does not modify inputData
  val results = for (array <- inputData.par) yield {
    longRunningProcess(array)
  }

  // use the results - must be sequential and in the same order as the input
  for (result <- results.toArray) {
    useResult(result)
  }
}
Assuming that the array backing a parallel collection can safely be reused (i.e., modified and then processed in parallel again), the above snippet should work as expected. However, when run it crashes with a memory error:
*** Error in `java': double free or corruption (fasttop): <memory address> ***
This is clearly related to the fact that the parallel collection directly uses the array it was created from; perhaps it tries to free that array when it goes out of scope. In any case, creating a new array on every loop iteration is not an option, again due to memory constraints. Explicitly creating a var parInputData = inputData.par, both inside and outside the while loop, leads to the same double-free error.
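In sketch form, the hoisted variant looks roughly like this (refillSequentially is an illustrative stand-in for the sequential read loop above, not code from the crash report):

// Sketch: create the .par wrapper once, outside the loop, and reuse it.
// This variant crashes with the same double-free error as the in-loop version.
def refillSequentially(data: Array[Array[Int]]): Unit =
  for (array <- 0 until data.length; index <- 0 until data(array).length)
    data(array)(index) = deserializeFromExternalSource()

var parInputData = inputData.par     // created once
while (haveMoreInput()) {
  refillSequentially(inputData)      // mutate the backing arrays in place
  val results = parInputData.map(longRunningProcess)
  results.toArray.foreach(useResult)
}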
I cannot make inputData itself a parallel collection, because it must be populated sequentially (when I tried assigning into a parallel version, I found that the assignments were not executed in order). The only workable approach I have found uses a Vector[Vector[Array[Int]]], with the length of the outer Vector equal to the number of parallel threads; I manually populate each sub-Vector with a chunk of the input data arrays and then perform a parallel map over the outer Vector.

This final approach works, but it is cumbersome to split the input into chunks by hand and to wrap those chunks in yet another layer of parallel collection. Is there a way to let Scala reuse a mutable array for parallel operations?

As noted, simply allocating fresh data structures works for relatively small inputs (< 1000000 input arrays), but on larger inputs the GC overhead leads to an exception.
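For concreteness, the Vector-of-Vectors workaround has roughly this shape (a sketch: the chunking arithmetic and the numThreads name are illustrative, not the exact code used):

// Split the reusable arrays into one chunk per thread, then map the outer
// Vector in parallel; each task walks its own chunk sequentially.
val numThreads = Runtime.getRuntime.availableProcessors
val chunkLen = (inputData.length / numThreads) max 1
val chunks: Vector[Vector[Array[Int]]] =
  inputData.grouped(chunkLen).map(_.toVector).toVector

val results = chunks.par.map(chunk => chunk.map(longRunningProcess))

// flatten back in order for the sequential consumption step
results.toVector.flatten.foreach(useResult)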
EDIT: Benchmarking showed that manual parallelization with plain threads runs roughly 50% faster than the parallel Vector solution above. I am wondering whether this is simply the overhead of a nicer abstraction, or whether the difference can be reduced by using parallel arrays instead of parallel Vectors; that would be yet another advantage of using Arrays vs. Vectors.
It does not really make sense to split your data into chunks yourself. The whole point of the parallel collections library is that it does this for you, and it does a better job of it than fixed chunk sizes would. Besides, arrays of arrays on the JVM are not like two-dimensional arrays in C: they are arrays of pointers to many small arrays, which makes them inefficient.
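To make the layout point concrete, here is what a flat, contiguous representation with manual index arithmetic looks like (a sketch; rows, cols, get, and set are illustrative names, not from the answer's code):

// element (r, c) of a rows x cols matrix lives at index r * cols + c
val rows: Int = ???
val cols: Int = ???
val flat = new Array[Int](rows * cols)
def get(r: Int, c: Int): Int = flat(r * cols + c)
def set(r: Int, c: Int, v: Int): Unit = flat(r * cols + c) = v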
A more elegant way to solve this is to use a plain Array and a parallel range (ParRange) to operate on it. longRunningProcess has to be changed to operate on a single element at a time:
val arraySize = ???

val inputData = new Array[Int](arraySize)
val outputData = new Array[ResultType](arraySize)

while (haveMoreInput()) {
  for (i <- 0 until arraySize)
    inputData(i) = deserializeFromExternalSource()
  for (i <- (0 until arraySize).par)
    outputData(i) = longRunningProcess(inputData(i))
  for (i <- 0 until arraySize)
    useResult(outputData(i))
}

This uses only two large arrays and never allocates any new ones; ParArray.map, ParArray.toArray, and Array.par all allocated new arrays in the original code. We still have to use a fixed arraySize to ensure that we do not load more data into memory than we can handle. A better solution would be reactive streams, but they are not production-ready yet.
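As a side note, if you need to cap how many threads the ParRange uses (say, to leave cores free for deserialization), you can attach a custom task support to the parallel collection. A minimal sketch, assuming Scala 2.12+ where ForkJoinTaskSupport wraps a java.util.concurrent.ForkJoinPool:

import scala.collection.parallel.ForkJoinTaskSupport
import java.util.concurrent.ForkJoinPool

val range = (0 until arraySize).par
range.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4)) // at most 4 workers
for (i <- range)
  outputData(i) = longRunningProcess(inputData(i))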