Posts by ajd*_*574

Python and OpenMP C extensions

I have a C extension in which I'd like to use OpenMP. When I import my module, however, I get an import error:


ImportError: /home/.../_entropysplit.so: undefined symbol: GOMP_parallel_end

I compiled the module with -fopenmp and -lgomp. Is this because my Python installation wasn't compiled with the -fopenmp flag? Do I have to build Python from source, or is there some other possibility? This is the only place I actually use OpenMP in my module:


unsigned int feature_index;
#pragma omp parallel for
for (feature_index = 0; feature_index < num_features; feature_index++) {

I'd like to stick with OpenMP if possible, because it's very simple and this parallelization is a natural fit for it.

Edit: I bit the bullet and recompiled Python with OpenMP support. My module works perfectly now, but this isn't a great solution: I can't really distribute it if it requires a complete recompile of Python. Does anyone know a way around this? Would ctypes work, perhaps?

Solved! It was a simple linking problem. (I rebuilt Python for that?!) OpenMP wasn't being linked correctly during compilation of the module. So it IS possible to load a C Python extension that uses OpenMP.
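For anyone hitting the same undefined-symbol error: the usual fix is to pass the OpenMP flag at both the compile and the link step of the extension build. A minimal distutils sketch of such a build configuration (the source file name is illustrative; only the `_entropysplit` module name comes from the error message above):

```python
from distutils.core import setup, Extension

# -fopenmp must appear in BOTH lists: without it at link time,
# GOMP_* symbols such as GOMP_parallel_end stay unresolved.
module = Extension(
    '_entropysplit',
    sources=['_entropysplit.c'],
    extra_compile_args=['-fopenmp'],
    extra_link_args=['-fopenmp'],
)

setup(name='entropysplit', version='0.1', ext_modules=[module])
```

Linking with -fopenmp rather than a bare -lgomp lets the compiler driver pull in the right OpenMP runtime itself, so no rebuild of Python is needed.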

python parallel-processing openmp python-c-extension

29 votes · 2 answers · 6004 views

Hadoop: Intermediate merge failed

I'm running into a strange issue. When I run a Hadoop job over a large dataset (>1TB of compressed text files), several of the reduce tasks fail with stack traces like this:

java.io.IOException: Task: attempt_201104061411_0002_r_000044_0 - The reduce copier failed
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:234)
Caused by: java.io.IOException: Intermediate merge failed
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639)
Caused by: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
    at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
    at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
    at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
    at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
    at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
    at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2698)
    ... 1 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:241)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:125)
    ... 8 more
java.io.IOException: Task: attempt_201104061411_0002_r_000056_0 - The reduce copier failed
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240) …
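An EOFException thrown from readInt() inside a custom Writable's readFields(), as in the trace above, most often means write() and readFields() are not byte-for-byte symmetric: the read side consumes more bytes than the write side produced, so the merge runs off the end of a record. A dependency-free Java illustration of the failure mode (class and method names are mine, not the actual IpAndIpCookieCount):

```java
import java.io.*;

// Sketch: readFields() must consume exactly the bytes write() emitted.
public class WritableSymmetry {
    // The "write" side serializes an int followed by a long: 12 bytes.
    static byte[] write(int ip, long count) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(ip);
        out.writeLong(count);
        return buf.toByteArray();
    }

    // The broken "read" side tries to consume 8 + 4 + 4 = 16 bytes,
    // so the final readInt() runs past the end of the record.
    static void readFieldsBroken(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readLong();
        in.readInt();
        in.readInt();  // consumes more than write() produced -> EOFException
    }

    public static void main(String[] args) throws IOException {
        byte[] record = write(42, 7L);
        try {
            readFieldsBroken(record);
        } catch (EOFException e) {
            System.out.println("EOFException: read side overran the record");
        }
    }
}
```

Inside a reduce-side merge the same mismatch surfaces exactly where the trace shows it: WritableComparator deserializes each key with readFields() before comparing.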

hadoop mapreduce cloudera

7 votes · 1 answer · 3805 views

Python multiprocessing.Queue deadlocks on put and get

I'm having deadlock problems with this piece of code:


def _entropy_split_parallel(data_train, answers_train, weights):
    CPUS = 1 #multiprocessing.cpu_count()
    NUMBER_TASKS = len(data_train[0])
    processes = []

    multi_list = zip(data_train, answers_train, weights)

    task_queue = multiprocessing.Queue()
    done_queue = multiprocessing.Queue()

    for feature_index in xrange(NUMBER_TASKS):
        task_queue.put(feature_index)

    for i in xrange(CPUS):
        process = multiprocessing.Process(target=_worker, 
                args=(multi_list, task_queue, done_queue))
        processes.append(process)
        process.start()

    min_entropy = None
    best_feature = None
    best_split = None
    for i in xrange(NUMBER_TASKS):
        entropy, feature, split = done_queue.get()
        if (min_entropy is None or entropy < min_entropy) and entropy is not None:
            min_entropy = entropy  # keep the running minimum up to date
            best_feature = feature
            best_split = split

    for …
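A pattern that avoids this class of deadlock: drain the result queue *before* joining the workers, and stop workers with sentinel values. A child process cannot exit while its queue feeder thread still holds unflushed data, so joining first (or never consuming done_queue) blocks both sides. A self-contained sketch in Python 3 (function names and the squaring task are mine, not the original _entropy_split_parallel):

```python
import multiprocessing


def _worker(task_queue, done_queue):
    # Pull tasks until the None sentinel arrives, pushing one result each.
    for task in iter(task_queue.get, None):
        done_queue.put(task * task)


def run_parallel(tasks, cpus=2):
    task_queue = multiprocessing.Queue()
    done_queue = multiprocessing.Queue()

    for task in tasks:
        task_queue.put(task)
    for _ in range(cpus):
        task_queue.put(None)  # one sentinel per worker so each can exit

    workers = [multiprocessing.Process(target=_worker,
                                       args=(task_queue, done_queue))
               for _ in range(cpus)]
    for w in workers:
        w.start()

    # Drain every result BEFORE joining: this is what breaks the
    # put/get deadlock described above.
    results = [done_queue.get() for _ in tasks]

    for w in workers:
        w.join()
    return sorted(results)


if __name__ == '__main__':
    print(run_parallel([1, 2, 3, 4]))
```

Passing the large multi_list to every worker through the Process args also pickles it once per process; sharing only the queues and indexing into data each side already holds keeps the pipes small.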

python queue concurrency deadlock multiprocessing

6 votes · 1 answer · 9381 views

Spring Security, form login, and concurrent sessions

I'm trying to keep users from signing in more than once (forcing the previous session to expire).

I've checked the documentation on this topic. I set things up much as the docs describe, but users are not limited to a single session: I can log in as the same user multiple times (in different browsers) and hold several concurrent sessions.

Here is what I believe to be the relevant part of my security configuration. I'm using custom UserDetailsService, UserDetails, and AuthenticationFilter implementations.


    <http entry-point-ref="authenticationEntryPoint">
        <!-- Make sure everyone can access the login page -->
        <intercept-url pattern="/login.do*" filters="none" />

        [...]

        <custom-filter position="CONCURRENT_SESSION_FILTER" ref="concurrencyFilter" />
        <custom-filter position="FORM_LOGIN_FILTER" ref="authenticationFilter" />

        <logout logout-url="/logout" logout-success-url="/login.do" />
    </http>

    <authentication-manager alias="authenticationManager">
        <authentication-provider user-service-ref="userDetailsService">
            <password-encoder hash="sha" />
        </authentication-provider>
    </authentication-manager>

    <beans:bean id="userDetailsService" class="[...]">
        <beans:property name="userManager" ref="userManager" />
    </beans:bean>

    <beans:bean id="authenticationFilter" class="[...]">
        <beans:property name="authenticationManager" ref="authenticationManager" />
        <beans:property name="eventPublisher">
            <beans:bean
                class="org.springframework.security.authentication.DefaultAuthenticationEventPublisher" />
        </beans:property>
        <beans:property name="filterProcessesUrl" value="/security_check" />
        <beans:property name="authenticationFailureHandler">
            <beans:bean
                class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler">
                <beans:property name="defaultFailureUrl" value="/login.do?login_error=true" />
            </beans:bean>
        </beans:property>
        <beans:property …
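One thing worth checking with a setup like this: in Spring Security 3, replacing the default FORM_LOGIN_FILTER means the concurrency machinery is no longer wired up for you. The custom authentication filter, the ConcurrentSessionFilter, and the concurrency strategy must all share a single SessionRegistry, and the strategy must be set on the custom filter. A config sketch under that assumption (bean ids are illustrative):

```xml
<beans:bean id="sessionRegistry"
    class="org.springframework.security.core.session.SessionRegistryImpl" />

<beans:bean id="concurrencyFilter"
    class="org.springframework.security.web.session.ConcurrentSessionFilter">
    <beans:property name="sessionRegistry" ref="sessionRegistry" />
    <beans:property name="expiredUrl" value="/login.do?expired=true" />
</beans:bean>

<beans:bean id="sessionAuthenticationStrategy"
    class="org.springframework.security.web.authentication.session.ConcurrentSessionControlStrategy">
    <beans:constructor-arg ref="sessionRegistry" />
    <beans:property name="maximumSessions" value="1" />
</beans:bean>

<!-- Add this property to the existing custom authenticationFilter bean -->
<beans:property name="sessionAuthenticationStrategy"
    ref="sessionAuthenticationStrategy" />
```

The session registry only sees sessions end if HttpSessionEventPublisher is registered as a listener in web.xml; without it, expired sessions are never evicted and the limit appears not to apply.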

java authentication spring spring-security

4 votes · 1 answer · 7076 views