FAQ

  • How can I run the code and local environment in an arm architecture?

    We provide a lib and servertest in build_for_apple-0.3.tar.gz, compiled on Ubuntu 20.04 for arm64 using an Apple MacBook Pro with an M2 chip. Note that the provided lib is only compatible with MPD SDK version 0.3; building against an earlier version may cause download failures. To compile and run tests on Apple silicon, copy these files into the appropriate locations in the MPD project, replacing the corresponding files compiled for the x86 platform. We recommend following the README included in build_for_apple-0.3.tar.gz for details.

build_for_apple-0.3.tar.gz
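
    For illustration, the replace-and-rebuild step can be sketched in shell. The directory and file names below are placeholders we invented for the sketch; the authoritative locations are documented in the README inside build_for_apple-0.3.tar.gz.

    ```shell
    # Minimal sketch of swapping x86 artifacts for the arm64 builds.
    # All paths and file names here are HYPOTHETICAL placeholders.
    set -e
    WORK=$(mktemp -d)

    # Simulate an MPD project tree with an x86 artifact and an extracted
    # arm64 build (placeholder names, not the real ones).
    mkdir -p "$WORK/mpd/lib" "$WORK/arm64_build"
    echo "x86 lib"   > "$WORK/mpd/lib/libmpd.a"
    echo "arm64 lib" > "$WORK/arm64_build/libmpd.a"

    # In a real setup you would extract the provided archive first:
    #   tar -xzf build_for_apple-0.3.tar.gz -C "$WORK/arm64_build"

    # Replace the x86 artifact with its arm64 counterpart.
    cp "$WORK/arm64_build/libmpd.a" "$WORK/mpd/lib/libmpd.a"

    cat "$WORK/mpd/lib/libmpd.a"   # prints "arm64 lib"
    ```

    After replacing the files, rebuild the MPD project as usual; the README in the tarball lists exactly which files to replace.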

  • Why does it take forever for us to receive the test results?

    We apologize for the delay in returning test results. The testing process takes far more effort than expected. The MPD SDK we provided for mininet deployment differs from the version we use for the online Android-based environment. To run one test, we have to check out the submitted code, run security checks, move the algorithm source code into an Android project, compile it with the MPD Android SDK to generate the APK, deploy the APK on the test phone, and run the test scripts. It turns out that, at the moment, this process cannot be effectively automated and most steps require manual work. In extreme cases (e.g., when the submitted code fails to download the whole file), a single test run can take several hours. We do not have the labor to handle all test requests and return results within the promised time frame.

    To speed up the testing process, we are building a local test platform based on mininet, along with a set of mininet topologies and testing sequences. For all future test requests, we will first return results from local tests. We will continue running tests on the online platform as capacity allows, prioritizing teams that have not yet run an online test. We will also create a separate scoreboard on our website for the mininet test platform.

  • I have noticed the scoreboard has been updated and the scores look very different from the previous version. What is the difference?

    Previously, we ran all tests on the online platform, and scoreboard results were the average score over 10 downloading tasks. However, we found that these scores were a poor indicator of final performance, since each test ran on a single device downloading only 10 files. Scores from different tests also could not be compared directly, because tests run at different times of day were easily affected by background traffic. Moreover, as explained in the previous question, we were unable to keep up with the growing demand for tests.

    Therefore, we switched to a new testing method. We now test every submission with the same mininet topology, which we believe best simulates the networking environment of most online cases. This method lets us quickly generate stable, comparable scores, and you can use the same topology to debug and improve your algorithm.

    However, please be aware that the final evaluation will still be based on results from the online platform. We will post details of the final test procedure soon. Every participating team should submit its code for an online test at least once, if it has not already done so, to make sure the code is compatible with the online platform.