Views: 908 | Replies: 5
[Bounty] Answer the question in this thread and the author, agong, will award you 5 gold coins.
[Help]
Help understanding reviewer comments
I understand the reviews at face value, but I would still like people with experience to take a look and offer some criticism and suggestions, as well as advice on how to revise the paper and where to submit it next. Thanks, everyone.
Link: https://pan.baidu.com/s/1jOUKi7Hp2Y76vfRuQMJ1Qg  Extraction code: euty
Strong Aspects (Comments to the author: What are the strong aspects of the paper?)

In this paper, the authors proposed an experience-based computation-offloading scheme with reinforcement learning in an MEC network.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)

1. In (11), it seems that the discount factor is 1, while the discount factor is defined over [0,1] in (12). It is not very clear.
2. Some symbols are undefined, e.g., the immediate reward r_t and the symbol \wedge in (15).
3. There are some flaws in the presentation, e.g., the doubled "the task" in Section II-B; also, the action should be written in lowercase.
4. In Algorithm 1, the meaning of "undated" is not clear.
5. It would be better to compare the proposed algorithm with DQN rather than DDPG.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)

In this paper, the authors proposed an experience-based computation-offloading scheme with reinforcement learning in an MEC network. The reviewer has the following comments.

1. In (11), it seems that the discount factor is 1, while the discount factor is defined over [0,1] in (12). It is not very clear.
2. Some symbols are undefined, e.g., the immediate reward r_t and the symbol \wedge in (15).
3. There are some flaws in the presentation, e.g., the doubled "the task" in Section II-B; also, the action should be written in lowercase.
4. In Algorithm 1, the meaning of "undated" is not clear.
5. It would be better to compare the proposed algorithm with DQN rather than DDPG.
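As background on the first weak point above: the reviewer is checking the paper's return definition against the standard discounted return in reinforcement learning, which in generic textbook notation (the symbols here are the usual convention, not necessarily the paper's equations (11)–(12)) is:

```latex
% Standard discounted return; \gamma is the discount factor.
G_t = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1}, \qquad \gamma \in [0,1]
```

Using \gamma = 1 (no discounting) is only well defined for episodic tasks, which is presumably why the reviewer flags the apparent inconsistency between (11) and (12).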
Review 2

Relevance and Timeliness: Good. (4)
Technical Content and Scientific Rigour: Solid work of notable importance. (4)
Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
Quality of Presentation: Well written. (4)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)

The paper proposes an improved experience-based replay reinforcement learning algorithm (EBRL) for computation offloading using MEC. Energy consumption and delay can be minimized by using the proposed algorithm compared with other algorithms. The paper is well written.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)

It would be better to show more practical situations in the performance comparison by considering realistic applications. Currently, only the arrival rate is changed to represent different environments.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)

Please see the weak aspects. It would be better to consider more realistic and practical situations. Robustness to environment change is another key performance criterion for MEC offloading.
Review 1

Relevance and Timeliness: Acceptable. (3)
Technical Content and Scientific Rigour: Valid work but limited contribution. (3)
Novelty and Originality: Some interesting ideas and results on a subject well investigated. (3)
Quality of Presentation: Readable, but revision is needed in some parts. (3)

Strong Aspects (Comments to the author: What are the strong aspects of the paper?)

This paper presents a new algorithm to offload edge tasks to edge servers within the MEC environment. The authors present an improved reinforcement learning framework adapted to dynamic environments, which selects samples from an improved experience pool. Simulation experiments reveal improved performance.

Weak Aspects (Comments to the author: What are the weak aspects of the paper?)

First, the distinction between the proposal and state-of-the-art reinforcement learning seems limited, as the selection of experience samples seems straightforward. Second, the simulation results are not discussed in detail to explain the novelty of the proposal.

Recommended Changes (Recommended changes. Please indicate any changes that should be made to the paper if accepted.)

First, the authors should explain the improvements: why experiences are vital to improving the performance of reinforcement learning, given that the selection of experience samples seems straightforward. Second, an example is needed to illustrate the workflow of the proposed algorithm. Third, the experiments do not present the detailed setup of the MEC environment or the performance metrics.
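Since all three reviews center on the experience-replay mechanism (the "improved experience pool" of the EBRL algorithm), here is a minimal sketch of a standard uniform replay buffer for readers unfamiliar with the concept. This is the plain DQN-style baseline, not the paper's improved sampling strategy; the class and method names are illustrative only.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal uniform experience-replay buffer (DQN-style baseline).

    Illustrative only: the paper's EBRL algorithm replaces the uniform
    sampling below with an improved experience-selection strategy.
    """

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # store one transition tuple (s_t, a_t, r_t, s_{t+1}, done)
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling without replacement over stored transitions
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Reviewer 1's point is that if the only change to this baseline is how `sample` picks transitions, the paper must argue why that selection is non-trivial and quantify its effect in the experiments.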