repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M) |
---|---|---|
patrickmcelwee/ml-visjs-graph-ng | 224253938 | Title: document use of rdf:type and rdfs:label
Question:
username_0: rdf:type maps to VisJS groups --> we could include an example of how to modify the presentation of the icon based on this.
rdfs:label changes the label of the node (for resources) or edge (for predicates) on the graph display. |
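For the thread above, a hedged vis.js sketch of that mapping (the RDF type/label values, container id, and icon settings are illustrative assumptions, not taken from the project):
```ts
import { DataSet, Network } from "vis-network/standalone";

// rdfs:label -> node/edge label; rdf:type -> vis.js group name.
const nodes = new DataSet([
  { id: 1, label: "Alice", group: "foaf:Person" },        // labels/types assumed
  { id: 2, label: "ACME", group: "org:Organization" },
]);
const edges = new DataSet([
  { from: 1, to: 2, label: "org:memberOf" },              // predicate's rdfs:label
]);

// Per-group presentation: each rdf:type can get its own shape, color, or icon.
const options = {
  groups: {
    "foaf:Person": { shape: "icon", icon: { face: "FontAwesome", code: "\uf007" } },
    "org:Organization": { color: { background: "#ffd54f" } },
  },
};

new Network(document.getElementById("graph")!, { nodes, edges }, options);
```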
Zrips/CMI | 1104693894 | Title: Maxplayer command error
Question:
username_0: **Description of issue:**
Error when editing the max player count with `/maxplayers`
---
**ERROR (DELETE IF YOU HAVE NO ERROR):**
```
[20:53:07 INFO]: username_0 issued server command: /maxplayers 50
[20:53:07 WARN]: java.lang.NoSuchMethodException: net.minecraft.server.dedicated.DedicatedServer.getPlayerList()
[20:53:07 WARN]: at java.base/java.lang.Class.getMethod(Class.java:2227)
[20:53:07 WARN]: at CMI9.1.0.6.jar//com.username_1.CMI.Reflections.changePlayerLimit(Reflections.java:258)
[20:53:07 WARN]: at CMI9.1.0.6.jar//com.username_1.CMI.commands.list.maxplayers.perform(maxplayers.java:34)
[20:53:07 WARN]: at CMI9.1.0.6.jar//com.username_1.CMI.commands.CommandsHandler.onCommand(CommandsHandler.java:396)
[20:53:07 WARN]: at org.bukkit.command.PluginCommand.execute(PluginCommand.java:45)
[20:53:07 WARN]: at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:172)
[20:53:07 WARN]: at org.bukkit.craftbukkit.v1_18_R1.CraftServer.dispatchCommand(CraftServer.java:897)
[20:53:07 WARN]: at net.minecraft.server.network.PlayerConnection.a(PlayerConnection.java:2368)
[20:53:07 WARN]: at net.minecraft.server.network.PlayerConnection.a(PlayerConnection.java:2179)
[20:53:07 WARN]: at net.minecraft.server.network.PlayerConnection.a(PlayerConnection.java:2160)
[20:53:07 WARN]: at net.minecraft.network.protocol.game.PacketPlayInChat.a(PacketPlayInChat.java:46)
[20:53:07 WARN]: at net.minecraft.network.protocol.game.PacketPlayInChat.a(PacketPlayInChat.java:6)
[20:53:07 WARN]: at net.minecraft.network.protocol.PlayerConnectionUtils.lambda$ensureRunningOnSameThread$1(PlayerConnectionUtils.java:56)
[20:53:07 WARN]: at net.minecraft.server.TickTask.run(TickTask.java:18)
[20:53:07 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.c(IAsyncTaskHandler.java:149)
[20:53:07 WARN]: at net.minecraft.util.thread.IAsyncTaskHandlerReentrant.c(IAsyncTaskHandlerReentrant.java:23)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.b(MinecraftServer.java:1440)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.c(MinecraftServer.java:189)
[20:53:07 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.y(IAsyncTaskHandler.java:122)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.bf(MinecraftServer.java:1418)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.y(MinecraftServer.java:1411)
[20:53:07 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.c(IAsyncTaskHandler.java:132)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.x(MinecraftServer.java:1389)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.w(MinecraftServer.java:1295)
[20:53:07 WARN]: at net.minecraft.server.MinecraftServer.lambda$spin$1(MinecraftServer.java:322)
[20:53:07 WARN]: at java.base/java.lang.Thread.run(Thread.java:833)
```
---
**CMI Version (using `/cmi version`):**

**Server Type (Spigot/Paperspigot/etc):**
etc(Purpur)
**Server Version (using `/ver`):**

(Purpur Forked)
Status: Issue closed
Answers:
username_1: Fixed with 9.1.1.2 version |
sulu/sulu | 166279903 | Title: No labels appear when saving roles
Question:
username_0: | Q | A |
| --- | --- |
| Bug? | yes |
| New Feature? | no |
| Sulu Version | 1.2.7 |
| Browser Version | all |
#### Actual Behavior
When changing and saving or creating a role, no labels appear to notify about the successful persistence of the role.
#### Expected Behavior
When changing or creating a role a success label should appear.
#### Steps to Reproduce
Change or create a role.
Status: Issue closed
Answers:
username_1: Fixed in https://github.com/sulu/sulu/issues/2620 |
XX-net/XX-Net | 288149895 | Title: The latest version 9.3.6 fails to update; the download always fails with "download failed" after a little over 1 MB
Question:
username_0: XX-Net Status:
sys-platform: AMD64, Windows-10-10.0.16299
os-system: Windows
os-version: 10.0.16299
os-release: 10
os-detail: Version:10-0; Build:16299; Platform:2; CSD:; ServicePack:0-0; Suite:256; ProductType:0
architecture: 32bit,WindowsPE
browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36
xxnet-version: 3.9.3
python-version: 2.7.13
openssl-version: 16.0.0 TLSv1_2 h2:alpn
lan-proxy: Disable
use-ipv6: force_ipv6
gws-ip-num: total:430 ipv4:0 ipv6:384
ipv4-status: OK
ipv6-status: Fail
connected-link: new:0 used:0
worker: h1:0 h2:0
scan-ip-thread-num: 10
ip-quality: 318
is-idle: 0
block-stat: OK
proxy_state: Fail
ca_state: Fail
Appid_Working: true
Appids_Out_Of_Quota: false
Appids_Not_Exist: false
Using_Public_Appid: false
Answers:
username_1: 😂 A better approach is to download it manually and extract it into the default directory...
username_0: Manual download is slow as a snail; it won't finish even after an hour or two.
Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10
username_2: Download it with IDM using 9 threads (don't use a proxy with IDM; it's blazing fast directly).
username_0: OK, the download is done, thanks.
Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10 |
billglover/character-stats | 1014465369 | Title: Store Vocab and Items in a local database
Question:
username_0: Retrieving vocab from Skritter every time we want to run analysis on the Items that have been studied is expensive (20+ API calls). The majority of these Items don't change frequently. Storing these in a local database would reduce the number of calls we make to Skritter and enable offline analysis of Items that have been studied.
Consider:
* Using SQLite as the database engine
* Storing the date of the last sync
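For the thread above, a minimal sketch of the proposed cache, written in TypeScript with the better-sqlite3 package for illustration (the project itself may use a different stack; table and column names are invented):
```ts
import Database from "better-sqlite3";

const db = new Database("skritter-cache.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS items (
    id      TEXT PRIMARY KEY,
    payload TEXT NOT NULL        -- raw Item JSON from the API
  );
  CREATE TABLE IF NOT EXISTS meta (
    key   TEXT PRIMARY KEY,
    value TEXT NOT NULL          -- e.g. timestamp of the last sync
  );
`);

// Upsert fetched Items so later analyses can run offline without API calls.
const upsert = db.prepare("INSERT OR REPLACE INTO items (id, payload) VALUES (?, ?)");
export function cacheItems(items: Array<{ id: string }>): void {
  const tx = db.transaction((batch: Array<{ id: string }>) => {
    for (const item of batch) upsert.run(item.id, JSON.stringify(item));
  });
  tx(items);
  db.prepare("INSERT OR REPLACE INTO meta (key, value) VALUES ('last_sync', ?)")
    .run(new Date().toISOString());
}
```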
Status: Issue closed |
plotly/plotly.js | 616076676 | Title: faceting react() bug
Question:
username_0: Sorry this isn't a more minimal codepen but there's something seriously wrong with the category orders here: https://codepen.io/username_0/pen/rNOKjpE
Note: the JSON here is basically what you'd get from PX in a Dash app when changing some filters.
Answers:
username_1: Here is a slightly more minimal [demo](https://codepen.io/MojtabaSamimi/pen/xxwzqmp?editors=0010).
username_1: Seems like there is no problem with `vertical` bars: [demo 1](https://codepen.io/MojtabaSamimi/pen/WNQyjer?editors=0010).
username_1: [demo 2](https://codepen.io/MojtabaSamimi/pen/ExVRmyE?editors=0010) illustrates that there is no bug with `vertical` bars. So the issue appears to be only with `horizontal` ones.
username_0: Note that in the initial pen, there is no `xaxis2.matches` but there is `yaxis2.matches`... if I add `xaxis2.matches="x"` to both in your vertical codepen, the issue returns. So maybe this is related to `matches` with categories and react?
username_2: Possibly related: #4718
Status: Issue closed
|
stripe/stripe-python | 61679247 | Title: Misleading refund charge documentation
Question:
username_0: The following refund snippet can be found on stripe docs[0]:
```
import stripe
stripe.api_key = "<KEY>"
ch = stripe.Charge.retrieve("ch_15gJkK21pAOKYiG428xO2Had")
re = ch.refunds.create()
```
But I can't find a refund**s** method within the Charge class.
[0] https://stripe.com/docs/api/python#create_refund
Answers:
username_1: Can you check which version of the stripe bindings you have? Also, can you check which API version you are on? (It can be seen here: https://dashboard.stripe.com/account/apikeys)
Status: Issue closed
username_2: As @username_1 said, the `refunds` method will only exist if you're using a recent version of the Stripe API. |
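For the stripe thread above, a hedged sketch of the refund call shape in recent API versions, shown with the Node bindings to keep this document's examples in one language (the key and pinned API version are placeholders):
```ts
import Stripe from "stripe";

// Placeholder key; pin whichever API version your account supports.
const stripe = new Stripe("sk_test_placeholder", { apiVersion: "2022-11-15" });

// Modern style: refunds are a top-level resource that references the charge.
async function refundCharge(chargeId: string): Promise<Stripe.Refund> {
  return stripe.refunds.create({ charge: chargeId });
}

refundCharge("ch_15gJkK21pAOKYiG428xO2Had").then((r) => console.log(r.status));
```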
metanorma/tex2mn | 569296764 | Title: Slashes get escaped in verbatim
Question:
username_0: E.g.
```asciidoc
[[projectile-cow]]
.The Projectile Cow with an accompanying cannon
[alt=The Projectile Cow with an accompanying cannon in ASCII]
....
.-.-.-.-.-.-.-.-.-.-.-.--.-.-.-.-.-.-.--.-.-.-.-.-.-.-.--.-.
_-_---__--__--___-___-__-____---___-________---____-____-__-
._.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.--..-.-.-.-.-.-..--.-
,..,.,.,.,.,..,.,,..,.,.,.,.,.,, ^^ .,,.,., ^^ .,.,.,.=
_>-.-.-.-._>_>_>_.-.-.-.-.-.-.-. \\\ .,.,. /// .-.-.-.-.
.,.,.,.,..,.,..,.,.,..,.,.,,..,., \ \_______/ / .,.,.,.,
.,.,.,.,..,.,.,.,..,,..,,.,.,.,.,. <[ {o} . ]> # .,.,.,.
.-.-.--.-.-.-.-.-.--.-.-.-.--.-.-. [ ______] .-.-.-.
.-.--.-.-.-.--.-.-.-.--.-.-.,.,., / [ ! ` `] .,.,..,.,.-
.,.,.,.-.-,l,-,l.-,.,.,.,-.,*. / {_!MOO!_} . ., . . ,
.-.-.-.-.-.-.-.-.-.-.-.-.-.- /M / -.-<>.,.,..-.-,
.-.-.--.-.-.-.-.-.-.-.-.--.. /MI LK\____ .-.-.-.-.-.
.-.-.-.--.-.-.-.-.-.-.-.-.- /MILK mil_____k ,.,.,..-,-
.-,-.-,-.,-.-,-.`-.-/-.. // -` // .-.p . .-.-.
.-.--.-.-.-.-.-.-.-. // ., // .-.-.-.-.-.-.-.-
.-.-.--.-.-.-.-.-.-. %____============ .-.-.--.-.-.-.-.-
-.-.-.-.--.-.-.-.-.-. ! ! .,-.-.-,-,--,-.-,-
,--.-.-,--.--.-.,--, \ \ .-,-,--.-,--,-.---,-.-,
,-.-.-,-,-.-,-,-.--, + > .-,--,-.--,-,-.-.-,--,-
,--.-,--,-,--.---,- .-,-,--.--,--,-.---,-,-.-.
.,.,.,.,..,.,.,.{A\ .,.,.,.,..,.,.,.,.,.,..,.,.,.,..,.,
.,.,.,.,.,.,.{GLASS\ .,..,.,.,.,.,..,.,.,.,.,.,.,..,.,.,.,
,..,.,,.,,.,{OF|MILK\..,.,.,.,.,..,.,.,.,.,.,..,.,.,.,.,.,.,
,.,..,.,,.,{ISWORTH},.,.,..,.,.,.,.,..,..,.,.,..,.,.,.,.,.,.
.,.,.,.,.{EVERYTNG}.-.-.--..-.-.-.-.--..--.-.-.-.-.--.-.-.-.
-.-.-.-{FORINFANTS}___--___-_-__-___--*(0~`~.,.,.,.,><><.><>
_-__-_{BUTBETTER}-.-,-,-,-,-,-,-,-,.-^^^^.-.-.-.-.^^^7>>>,..
.._...{WITH_HONEY}-.-.-.-.-.-.-.-.-.-.RANDOM(BUSH)SHRUBS>_..
GRASS_GRASS_GRASS_GRASS_GRASS_SOMEROCKS>GRASS>GRASS<GRASS>PC
SOIL_ROOTS_SOIL_SOIL_ROCKS_SOIL_GRASS_GRASS_GRASS_ROCKS_SOIL
CLAY_ROCKS_PEBBLES_CLAY_CLAY_CLAY_CLAY_GOLD_CLAY_CLAY><_WORM
ROOTS_CLAY_SKELETON_MORESOIL_CLAY_CLAY_CLAY_CLAY_<MUSHROOMS>
....
```
becomes
```latex
\begin{figure}[h]\centering
\label{projectile-cow}
\caption{The Projectile Cow with an accompanying cannon}
\alt{The Projectile Cow with an accompanying cannon in ASCII}
\begin{verbatim}
.-.-.-.-.-.-.-.-.-.-.-.--.-.-.-.-.-.-.--.-.-.-.-.-.-.-.--.-.
_-_---__--__--___-___-__-____---___-________---____-____-__-
._.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.--..-.-.-.-.-.-..--.-
,..,.,.,.,.,..,.,,..,.,.,.,.,.,, ^^ .,,.,., ^^ .,.,.,.=
_>-.-.-.-._>_>_>_.-.-.-.-.-.-.-. \\\ .,.,. /% // .-.-.-.-.
.,.,.,.,..,.,..,.,.,..,.,.,,..,., \ \_______/ / .,.,.,.,
.,.,.,.,..,.,.,.,..,,..,,.,.,.,.,. <[ {o} . ]> # .,.,.,.
.-.-.--.-.-.-.-.-.--.-.-.-.--.-.-. [ ______] .-.-.-.
.-.--.-.-.-.--.-.-.-.--.-.-.,.,., / [ ! ` `] .,.,..,.,.-
.,.,.,.-.-,l,-,l.-,.,.,.,-.,*. / {_!MOO!_} . ., . . ,
[Truncated]
.-.-.--.-.-.-.-.-.-. %____============ .-.-.--.-.-.-.-.-
-.-.-.-.--.-.-.-.-.-. ! ! .,-.-.-,-,--,-.-,-
,--.-.-,--.--.-.,--, \ \ .-,-,--.-,--,-.---,-.-,
,-.-.-,-,-.-,-,-.--, + > .-,--,-.--,-,-.-.-,--,-
,--.-,--,-,--.---,- .-,-,--.--,--,-.---,-,-.-.
.,.,.,.,..,.,.,.{A\ .,.,.,.,..,.,.,.,.,.,..,.,.,.,..,.,
.,.,.,.,.,.,.{GLASS\ .,..,.,.,.,.,..,.,.,.,.,.,.,..,.,.,.,
,..,.,,.,,.,{OF|MILK\..,.,.,.,.,..,.,.,.,.,.,..,.,.,.,.,.,.,
,.,..,.,,.,{ISWORTH},.,.,..,.,.,.,.,..,..,.,.,..,.,.,.,.,.,.
.,.,.,.,.{EVERYTNG}.-.-.--..-.-.-.-.--..--.-.-.-.-.--.-.-.-.
-.-.-.-{FORINFANTS}___--___-_-__-___--*(0~`~.,.,.,.,><><.><>
_-__-_{BUTBETTER}-.-,-,-,-,-,-,-,-,.-^^^^.-.-.-.-.^^^7>>>,..
.._...{WITH_HONEY}-.-.-.-.-.-.-.-.-.-.RANDOM(BUSH)SHRUBS>_..
GRASS_GRASS_GRASS_GRASS_GRASS_SOMEROCKS>GRASS>GRASS<GRASS>PC
SOIL_ROOTS_SOIL_SOIL_ROCKS_SOIL_GRASS_GRASS_GRASS_ROCKS_SOIL
CLAY_ROCKS_PEBBLES_CLAY_CLAY_CLAY_CLAY_GOLD_CLAY_CLAY><_WORM
ROOTS_CLAY_SKELETON_MORESOIL_CLAY_CLAY_CLAY_CLAY_<MUSHROOMS>
\end{verbatim}
\end{figure}
```
Status: Issue closed |
Rocologo/MobHunting | 789754831 | Title: paper 1.16.5 broke mobhunting
Question:
username_0: updated Paper to 1.16.5 build 436.
[09:14:14] [Server thread/ERROR]: Error occurred while enabling MobHunting v7.5.7 (Is it up to date?)
java.lang.IllegalArgumentException: No enum constant one.lindegaard.Core.rewards.RewardType.MemorySection[path='212.type', root='YamlConfiguration']
at java.lang.Enum.valueOf(Enum.java:240) ~[?:?]
at one.lindegaard.Core.rewards.RewardType.valueOf(RewardType.java:5) ~[?:?]
at one.lindegaard.Core.rewards.Reward.read(Reward.java:259) ~[?:?]
at one.lindegaard.Core.rewards.RewardBlockManager.load(RewardBlockManager.java:131) ~[?:?]
at one.lindegaard.Core.rewards.RewardBlockManager.<init>(RewardBlockManager.java:39) ~[?:?]
at one.lindegaard.Core.Core.<init>(Core.java:57) ~[?:?]
at one.lindegaard.MobHunting.MobHunting.onEnable(MobHunting.java:143) ~[?:?]
at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:263) ~[patched_1.16.5.jar:git-Paper-436]
at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:380) ~[patched_1.16.5.jar:git-Paper-436]
at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:483) ~[patched_1.16.5.jar:git-Paper-436]
at org.bukkit.craftbukkit.v1_16_R3.CraftServer.enablePlugin(CraftServer.java:501) ~[patched_1.16.5.jar:git-Paper-436]
at org.bukkit.craftbukkit.v1_16_R3.CraftServer.enablePlugins(CraftServer.java:415) ~[patched_1.16.5.jar:git-Paper-436]
at net.minecraft.server.v1_16_R3.MinecraftServer.loadWorld(MinecraftServer.java:464) ~[patched_1.16.5.jar:git-Paper-436]
at net.minecraft.server.v1_16_R3.DedicatedServer.init(DedicatedServer.java:239) ~[patched_1.16.5.jar:git-Paper-436]
at net.minecraft.server.v1_16_R3.MinecraftServer.w(MinecraftServer.java:935) ~[patched_1.16.5.jar:git-Paper-436]
at net.minecraft.server.v1_16_R3.MinecraftServer.lambda$a$0(MinecraftServer.java:173) ~[patched_1.16.5.jar:git-Paper-436]
at java.lang.Thread.run(Thread.java:834) [?:?]
[09:14:14] [Server thread/INFO]: [MobHunting] Disabling MobHunting v7.5.7
Answers:
username_1: Mobhunting is working fine for me in paper 1.16.5 using 7.5.8-SNAPSHOT-B1100, except the database connection closes very slowly. (both paper 446 and 437)
username_0: looks like somehow rewards.yml has been corrupted :/
I found these things in the yml: &id001 and &id002.
One I found behind the id and one I found in location.
After this I found most of them behind the type value.
Here is the example:
'96': &id001
location:
==: org.bukkit.Location
world: world
x: -1300.0
y: 66.0
z: 297.0
pitch: 0.0
yaw: 0.0
displayname: §9Shulker
'212':
location: &id002
==: org.bukkit.Location
world: world
x: 2.49211E7
y: 97.0
z: -2.4187739E7
pitch: 0.0
yaw: 0.0
displayname: §9Silverfish
money: '0.00000'
type: *id001
similar i found in type:
'218':
location:
==: org.bukkit.Location
world: world
x: -1119.0
y: 63.0
z: 424.0
pitch: 0.0
yaw: 0.0
displayname: §9Llama
money: '0.00000'
type: *id002 |
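A note on the thread above: `&id001`/`*id001` are YAML anchors and aliases, which the YAML serializer emits when the same in-memory object is referenced more than once, so this is reference-sharing rather than random corruption. The alias under `type:` loads as a whole mapping instead of a string, which matches the `RewardType.valueOf(MemorySection...)` error in the report. A small sketch with the js-yaml package (document contents abridged from the thread):
```ts
import yaml from "js-yaml";

// SnakeYAML (used by Bukkit's YamlConfiguration) writes "&id001" the first
// time an object is serialized and "*id001" for later references to the
// same in-memory object.
const doc = `
'96': &id001
  displayname: Shulker
'212':
  money: '0.00000'
  type: *id001   # alias: loads as the whole '96' mapping, not a string
`;

const data = yaml.load(doc) as Record<string, any>;
console.log(typeof data["212"].type);       // "object" -- hence valueOf() fails
console.log(data["212"].type.displayname);  // "Shulker"
```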
exercism/java | 602621216 | Title: ledger: implement exercise
Question:
username_0: The exercise **ledger** has not been implemented yet for the Java track.
The description of the exercise can be found in the [problem specification repository](https://github.com/exercism/problem-specifications/tree/master/exercises/ledger).
How to implement a new exercise for the Java track is described in detail in [CONTRIBUTING.md](https://github.com/exercism/java/blob/master/CONTRIBUTING.md#adding-a-new-exercise). Please have a look there first before starting work on the exercise.
Also please make sure it is clear that you are currently working on this issue, either by asking to be assigned to it, or by opening an empty PR.
When opening a PR, please reference this issue using any of the [closing keywords](https://help.github.com/en/articles/closing-issues-using-keywords).
If you have had a look at the exercise description and you concluded that the exercise might not be possible to implement in the Java language, please leave a comment and describe the problem.
In case you have any further questions, feel free to ask here. |
OfficeDev/office-js-docs | 246355754 | Title: Sideloading documentation doesn't explain how to deal with caching in Word 2016 (Windows 10)
Question:
username_0: Updating/refreshing previously loaded add-ins is fairly straightforward on OSX and web (windows and mac)
However, side-loading into Word 2016 on Windows 10 incurs some seriously sticky caching and there seems to be no easy way to force Word 2016 to "refresh" a previously loaded add-in without bumping the version number in the manifest.
Is there a way to delete a cache of a previously loaded add-in in Word 2016 on Windows? If so what is it, and could we have it in the documentation?
Answers:
username_0: Pages like this: https://dev.office.com/docs/add-ins/testing/create-a-network-shared-folder-catalog-for-task-pane-and-content-add-ins
username_1: @username_0 -- thank you for the documentation request and apologies for the delayed response. Regarding the question you've asked -- please see the answer on this Stack Overflow post: https://stackoverflow.com/questions/44309909/invalidate-ms-word-cache-for-testing-add-ins, which appears to provide the information that you're seeking. (I've also labeled this GitHub issue as a documentation request, so that it's on the radar for future consideration.)
username_1: In addition to the SO post that I linked to in my previous comment, the docs have also been updated with this info: https://docs.microsoft.com/en-us/office/dev/add-ins/testing/troubleshoot-manifest#clear-the-office-cache. (Closing this issue since it's been resolved.)
Status: Issue closed
|
cncf/landscape | 251135248 | Title: Please add OpenEBS to storage
Question:
username_0: https://github.com/openebs
Answers:
username_0: Project or Product name: OpenEBS
Is it open source?: Yes
What are the best URL(s) for it?:
https://github.com/openebs/openebs
www.openebs.io
What category does it belong in? Container Native Storage
What's the URL for the best logo (and it needs to include the name)?
https://github.com/openebs/chitrakala/blob/master/logo/png/logo-01.png
Anything else we need to know?: Nope, thank you!
username_1: Hi @username_0 thanks for the request, we'll be including OpenEBS in our upcoming version.
username_0: thank you @username_1
username_1: Thanks @username_0, closing as this update was included in the most recent version https://github.com/cncf/landscape#current-version
Status: Issue closed
|
florianholzapfel/express-restify-mongoose | 111247418 | Title: Somebody screwed up
Question:
username_0: Using `v2.2.0`, my previously passing tests fail miserably. I'm currently tracking down which one of you to blame. Undesired behaviour detected includes querying two models in the same route. Ex:
Having `Model1` and `Model2`
```http
GET /Model1/:id
```
Should query mongoose Model1, which it does (let's assume).
Now
```http
GET /Model2/:id
```
Queries both Model1 and Model2, resulting in a 404 because model2 has no document with that id at all.
Previous versions of this library don't trigger that behaviour (including v2.1.0).
I will spend my morning trying to debug your files. I will be on you. I will not rest. You will fix this.
Answers:
username_0: using `v2.1.2` does not exhibit this issue; I'm still testing.
username_1: First off, change your tone. **This is unacceptable.**
Secondly, provide a sample to reproduce this issue.
username_0: It's business cat tone, I thought you'd get it. Anyway, challenge accepted. I'll post your homework.
username_0: I'll close this as it is being reviewed in #182.
Status: Issue closed
username_1: @username_0 fix is on npm as `2.3.0-rc.1`
username_0: Hey thanks for fixing it.
username_1: Thank @dvlsg |
pydii/pydii | 692103837 | Title: Problem creating vacancy, while reading from user input structure
Question:
username_0: Dear Developer,
I tried to make it read the input structure. It seems it has some problems with creating vacancy structures.
I was using the following structure as POSCAR input: https://materialsproject.org/materials/mp-1330/
Here is the error:
Traceback (most recent call last):
File "/home/gnayak028/software/anaconda3/bin/gen_def_structure", line 11, in <module>
load_entry_point('pydii==1.9.0', 'console_scripts', 'gen_def_structure')()
File "/home/gnayak028/software/anaconda3/lib/python3.7/site-packages/pydii-1.9.0-py3.7.egg/pydii/scripts/gen_def_structure.py", line 258, in im_vac_antisite_def_struct_gen
struct_file=args.struct)
File "/home/gnayak028/software/anaconda3/lib/python3.7/site-packages/pydii-1.9.0-py3.7.egg/pydii/scripts/gen_def_structure.py", line 98, in vac_antisite_def_struct_gen
vac = Vacancy(structure=prim_struct, defect_site=site)
File "/home/gnayak028/software/anaconda3/lib/python3.7/site-packages/pymatgen/analysis/defects/core.py", line 54, in __init__
self._multiplicity = multiplicity if multiplicity else self.get_multiplicity()
File "/home/gnayak028/software/anaconda3/lib/python3.7/site-packages/pymatgen/analysis/defects/core.py", line 180, in get_multiplicity
raise ValueError("Site {} is not in bulk structure! Cannot create Vacancy object.".format(self.site))
ValueError: Site [ 0. -1.017259 -1.017259] Al is not in bulk structure! Cannot create Vacancy object.
Regards,
Ganesh
Answers:
username_1: Dear Ganesh,
Thank you for reporting this issue and I apologize for not getting back to you sooner.
Unfortunately, I was unable to reproduce this error. Are you using the latest version of pydii?
For example, since the link you gave was a materials project structure, I built pydii (e.g. `pip install .` in the main directory) and used the command:
`gen_def_structure --mpid mp-1330`
which produced five relevant directories (two of them corresponding to vacancies). I could also download the structure from the website and use
`gen_def_structure --struct AlN.cif`
to the same effect. It sounds like maybe you used your own POSCAR though? If you could please elaborate or share your input file, I might be able to try it out myself.
Best,
Enze
username_0: Dear Enze,
Sorry for the late reply. I was trying to run it in my calculations. You are right, it takes the structure from the .cif file; I managed that. Another problem I have is the following.
During the profile scan with the structure, it requires the mpid:
gen_def_profile -T 300 --file AlN/_raw_defect_energy.json
which produced the following error:
===========
ERROR: mpid is not given.
===========
Again, maybe that can be solved with the '--formula' option:
gen_def_profile --formula AlN -T 300 --file AlN/_raw_defect_energy.json
Also I tried the following command (simply putting mpid):
gen_def_profile --mpid mp-1330 -T 300 AlN/_raw_defect_energy.json
In both of the above cases, the error I got is the following:
No trial chemical potential is given.
Traceback (most recent call last):
File "/home/ganesh/.local/bin/gen_def_profile", line 11, in <module>
load_entry_point('pydii==1.9.0', 'console_scripts', 'gen_def_profile')()
File "/home/ganesh/.local/lib/python3.6/site-packages/pydii/scripts/gen_def_profile.py", line 117, in im_vac_antisite_def_profile
conc_dat, en_dat, mu_dat = get_def_profile(args.mpid, args.temp, file_name)
File "/home/ganesh/.local/lib/python3.6/site-packages/pydii/scripts/gen_def_profile.py", line 36, in get_def_profile
antisites, T, plot_style='gnuplot')
File "/home/ganesh/.local/lib/python3.6/site-packages/pydii/dilute_solution_model.py", line 679, in compute_defect_density
structure, e0, vac_defs, antisite_defs, T, trial_chem_pot=trial_chem_pot)
File "/home/ganesh/.local/lib/python3.6/site-packages/pydii/dilute_solution_model.py", line 401, in dilute_solution_model
mu_vals = compute_mus_by_search()
File "/home/ganesh/.local/lib/python3.6/site-packages/pydii/dilute_solution_model.py", line 300, in compute_mus_by_search
raise ValueError()
ValueError
Please help me to solve this.
Thank you in advance,
Regards,
Ganesh
username_1: Dear Ganesh,
Thank you for the detailed reply. I'm glad you were able to generate your structures.
With the **ERROR** that you reported, your conclusion is correct: either `--mpid` or `--formula` is required, even if `--file` is supplied. This is poorly documented; sorry about that.
As to the error that you're getting, I **was** able to reproduce it. Thanks for pointing it out. At the moment I will not be able to do any further testing as NERSC is down, but I can offer a few comments.
- I'm pretty unfamiliar with `AlN` and noticed that it was an insulator with a pretty sizable bandgap. Taking this into account...
- The default VASP settings won't be very suitable for your problem. Hopefully the ones I used for my test calculations were OK. I'm assuming you checked this though for your own calculations.
- Perhaps more importantly, this library was based off of the dilute solution model (see [this paper](https://link.aps.org/doi/10.1103/PhysRevB.63.094103)) and I just don't know how suitable it is for nonmetallic structure with such strong covalent bonds.
- It seems like a numeric problem, and for this I apologize as it's on us.
- I noticed the defect excitation energies were quite large, and so maybe somewhere in an exponential the terms are being reduced to zero or blowing up.
- The current method of obtaining the solution relies on the `sympy` library and a bit of guess-and-check, so it's totally possible that the solution to your problem is outside of the bounds that the code currently considers.
Truthfully, I'm not sure how quickly I'll be able to get to the root of this issue, but I will try my best. Happy to hear your thoughts on any of the above.
Cheers,
Enze
username_0: Dear Enze,
Thank you very much for the quick reply.
It will be very helpful if you could help with this numeric problem.
Again, I will test it for a metallic system like TiN, and maybe we can come to a conclusion about this numeric problem you are indicating.
Thank you very much for your help for now and the future.
Regards,
Ganesh
username_2: To follow up on what Enze mentioned regarding the error in compute_mus_by_search: the method used to find the mus was based on brute-force grid search.
I implemented it at a time when I was not very familiar with root finding or with chemical potentials. Better techniques may be available, but if the grid search fails, it means there is no numerical solution to the problem.
Though it looks like a solver error, essentially it implies a solution for the chemical potentials could not be found, meaning the problem is ill-defined.
An ill-defined problem could imply two things:
1) There are some issues with the parameters supplied (you might have incorrectly computed some of the parameters). It's so easy to introduce errors in defect calculations.
2) The dilute solution method is not applicable for the system.
#2 is a very possible scenario because the applicability of the dilute solution model is quite limited. The special quasi-random structure (SQS) method is more stable and you may attempt to solve your problem using SQS.
<NAME>
Engineer
Princeton Plasma Physics Lab (PPPL)
username_0: Dear Enze and Bharat,
Thank you very much for your support and reply.
I am testing it in the case of TiN. I will report the results.
One more thing. I wonder why there are no interstitial sites (as a solute as well as a solvent) in the defect concentrations. Could you help me with the following?
Thank you very much in advance!!
Best Regards,
Ganesh |
Nesinn/f1-3-c2p1-colmar-academy | 278194077 | Title: SUBTLETY - media queries
Question:
username_0: nice job using media queries to help the page adapt to changes in size. for the most part, things look great! there are just a couple sections that I noticed could use some extra attention -
https://github.com/Nesinn/f1-3-c2p1-colmar-academy/blob/master/Capstone/index.html#L82
check out the page from about 600px wide to the break at 400px. these small images stay the same height so when the width changes they become stretched and a little funky looking.
https://github.com/Nesinn/f1-3-c2p1-colmar-academy/blob/master/Capstone/res/css/styles.css#L79
when resizing from large to small, this white border on the nav items temporarily pops into the page. the rest of the time it is not noticeable at all, so it might be best to just get rid of this border unless you're using it for something else.
take a look through the rest of the page for other sections or elements that might look funny during resize. remember that usually a simple solution is better than handling each little situation. look for solutions that resolve a variety of problems or set you up to avoid the problems altogether - like the flex-wrap handling re-organizing the courses. |
square/leakcanary | 462368667 | Title: Failure 1.6.3 31007b4
Question:
username_0: Read this first: https://github.com/square/leakcanary#can-a-leak-be-caused-by-the-android-sdk
### LeakTrace information
```
In com.furiosojack.parastickerapp:6.0.0-beta04:96.
* FAILURE in 1.6.3 31007b4:java.lang.NullPointerException: Attempt to invoke virtual method 'com.squareup.haha.perflib.ClassObj com.squareup.haha.perflib.Instance.getClassObj()' on a null object reference
at com.squareup.leakcanary.HeapAnalyzer.checkForLeak(HeapAnalyzer.java:180)
at com.squareup.leakcanary.internal.HeapAnalyzerService.onHandleIntentInForeground(HeapAnalyzerService.java:67)
at com.squareup.leakcanary.internal.ForegroundService.onHandleIntent(ForegroundService.java:55)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:76)
at android.os.Handler.dispatchMessage(Handler.java:105)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
* Reference Key: a39b7667-8630-4e2c-a7b8-657ad3c51f06
* Device: Google google Android SDK built for x86 sdk_gphone_x86
* Android Version: 8.0.0 API: 26 LeakCanary: 1.6.3 31007b4
* Durations: watch=11328ms, gc=124ms, heap dump=3582ms, analysis=1955ms
* Excluded Refs:
| Field: android.os.Message.obj
| Field: android.os.Message.next
| Field: android.os.Message.target
| Field: android.view.inputmethod.InputMethodManager.mNextServedView
| Field: android.view.inputmethod.InputMethodManager.mServedView
| Field: android.view.inputmethod.InputMethodManager.mServedInputConnection
| Field: android.view.inputmethod.InputMethodManager.mCurRootView
| Field: android.accounts.AccountManager$AmsTask$Response.this$1
| Field: android.view.accessibility.AccessibilityNodeInfo.mOriginalText
| Field: com.android.internal.policy.BackdropFrameRenderer.mDecorView
| Field: android.view.Choreographer$FrameDisplayEventReceiver.mMessageQueue (always)
| Thread:FinalizerWatchdogDaemon (always)
| Thread:main (always)
| Thread:LeakCanary-Heap-Dump (always)
| Class:java.lang.ref.WeakReference (always)
| Class:java.lang.ref.SoftReference (always)
| Class:java.lang.ref.PhantomReference (always)
| Class:java.lang.ref.Finalizer (always)
| Class:java.lang.ref.FinalizerReference (always)
```
Answers:
username_1: Please upgrade to the [latest release](https://square.github.io/leakcanary/changelog) of LeakCanary.
Status: Issue closed
|
altimetrik-onboarding-uy/soverman | 329134491 | Title: 2nd Part User Story - 3.: Switch Views by categories
Question:
username_0: As a Manager, I would like to be able to choose the compensations to be shown on screen from those available in the system, in order to simplify our data review process. It would be great if I could select them by category: salary, study, research, or see them all together. |
fiznool/passport-oauth2-refresh | 263771785 | Title: Inherit strategy's overwritten getOAuthAccessToken function
Question:
username_0: Some strategies overwrite the `_oauth2.getOAuthAccessToken` function, mostly for the purpose of custom headers (e.g. Reddit and Spotify require HTTP Basic Auth headers with the Refresh request). Here's [the Reddit strategy](https://github.com/Slotos/passport-reddit/blob/master/lib/passport-reddit/strategy.js), and [the Spotify strategy](https://github.com/JMPerez/passport-spotify/blob/master/lib/passport-spotify/strategy.js).
In this repo, the `strategy._oauth.constructor` from #3 still constructs the client with the `getOAuthAccessToken` from the original `oauth` module, instead of inheriting the Strategy's overwrite. So Reddit isn't receiving the required header and the refresh is failing.
Any thoughts on how to get the strategy's version of the function to be inherited and called instead? If I figure it out I'd be happy to submit a PR.
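For the thread above, a rough sketch of one way the refresh client could inherit a strategy's override (passport strategy internals are not a public API; the shapes below are assumptions based on the thread):
```ts
// Sketch only: underscore-prefixed strategy properties are internals, and
// the property names here are assumptions.
type OAuth2Like = { getOAuthAccessToken: (...args: unknown[]) => void };

function makeRefreshClient<T extends OAuth2Like>(strategyOAuth2: T): T {
  // Build the refresh client from the strategy's own prototype chain, so a
  // subclass override of getOAuthAccessToken (e.g. Reddit's Basic Auth
  // headers) is inherited instead of the stock `oauth` implementation.
  const client = Object.create(Object.getPrototypeOf(strategyOAuth2)) as T;
  // Copying own properties also carries over instance-level overrides,
  // along with client ids, secrets, and endpoint URLs.
  return Object.assign(client, strategyOAuth2);
}
```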
Status: Issue closed |
aws/aws-ofi-nccl | 452244747 | Title: rdma/fabric.h not found
Question:
username_0: I've installed the EFA driver successfully on an AWS EC2 Deep Learning AMI. I've also built NCCL 2.4.2. When building aws-ofi-nccl, I get the following error during configure:
```
[ec2-user@ip-10-0-48-207 aws-ofi-nccl]$ ./configure --with-cuda=/usr/local/cuda/ --with-nccl=$NCCL_HOME --with-mpi=/opt/amazon/efa/bin/
checking whether make supports nested variables... yes
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether make sets $(MAKE)... yes
checking for ar... ar
checking the archiver (ar) interface... ar
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for stdbool.h that conforms to C99... yes
checking for _Bool... yes
checking for inline... inline
checking for size_t... yes
checking for ssize_t... yes
checking for uint64_t... yes
checking for stdlib.h... (cached) yes
checking for GNU libc compatible malloc... yes
checking for memset... yes
checking for realpath... yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking for unistd.h... (cached) yes
checking rdma/fabric.h usability... no
checking rdma/fabric.h presence... no
checking for rdma/fabric.h... no
configure: error: unable to find required headers
[ec2-user@ip-10-0-48-207 aws-ofi-nccl]$ make -j 32 NCCL_HOME=$NCCL_HOME
make: *** No targets specified and no makefile found. Stop.
```
I have the libfabric source, but if I make and install it, it overwrites the EFA libfabric. Any ideas?
Answers:
username_1: @username_0 One of the dependencies of the plugin is libfabric as well, which provides the required header. Do you have an installation of that? If yes, please configure the plugin with the `--with-libfabric` parameter.
username_0: libfabric is installed by the EFA driver installation script. I'm using Amazon Linux and can confirm that libfabric and libfabric-devel were installed by yum. Is there a specific location I should point to for this installation?
username_1: Yes, the EFA installation script installs the binaries and libraries at `/opt/amazon/efa`. You could use that for configuring the plugin.
Status: Issue closed
username_0: `--with-libfabric=/opt/amazon/efa/` solved my problem, thanks. |
Penetrum-Security/Maltree-Issue-Repo | 713724451 | Title: Maltree Issue (2629dd13965f856)
Question:
username_0: Python version `2.7.17`
Traceback:
```
Traceback (most recent call last):
File "/home/malcore/bin/maltree/api/cuckoo_api.py", line 41, in status_check
req = requests.get(req_url, headers=self.headers)
File "/home/malcore/bin/maltree/venv/local/lib/python2.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/home/malcore/bin/maltree/venv/local/lib/python2.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/malcore/bin/maltree/venv/local/lib/python2.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/malcore/bin/maltree/venv/local/lib/python2.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/malcore/bin/maltree/venv/local/lib/python2.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
HTTPConnectionPool(host='XXX.XXX.XXX.XXX', port=8090): Max retries exceeded with url: /cuckoo/status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efdd12c7510>: Failed to establish a new connection: [Errno 113] No route to host',))
```
Running platform: `Linux-5.3.0-28-generic-x86_64-with-Ubuntu-18.04-bionic` |
mono/SkiaSharp | 1175587292 | Title: PathTooLongException: libSkiaSharp.LastIndexOf('.app' is too long, or a component of the specified path is too long.
Question:
username_0: Receiving this bug when Building against Xamarin iOS on Visual Studio Windows hooked up to a Mac Mini build agent.
```
Error MessagingRemoteException: An error occurred on client Build while executing a reply for topic xvs/build/execute-task/AndyTV.Mobile.iOS/9ca172d002fCodesign
AggregateException: One or more errors occurred.
PathTooLongException: The path '/Users/andrewherrick/Library/Caches/Xamarin/mtbs/builds/AndyTV.Mobile.iOS/9ca172d1755ab873a7d3e57a2388652ef1e841bf1a8a4e400d32e267460b835b/obj\iPhone\Debug\device-builds\iphone12.3-15.3.1/codesign\\bin\iPhone\Debug\device-builds\iphone12.3-15.3.1\AndyTV.Mobile.iOS.app\\Frameworks\libSkiaSharp.framework\libSkiaSharp.Substring(bin\iPhone\Debug\device-builds\iphone12.3-15.3.1\AndyTV.Mobile.iOS.app\\Frameworks\libSkiaSharp.framework\libSkiaSharp.LastIndexOf('.app' is too long, or a component of the specified path is too long.
```
```
Microsoft Visual Studio Professional 2022
Version 17.2.0 Preview 2.0
VisualStudio.17.Preview/17.2.0-pre.2.0+32314.265
Microsoft .NET Framework
Version 4.8.04084
Installed Version: Professional
Visual C++ 2022 00476-80000-00000-AA254
Microsoft Visual C++ 2022
.NET Core Debugging with WSL 1.0
.NET Core Debugging with WSL
ADL Tools Service Provider 1.0
This package contains services used by Data Lake tools
ASA Service Provider 1.0
ASP.NET and Web Tools 2019 17.2.240.24236
ASP.NET and Web Tools 2019
Azure App Service Tools v3.0.0 17.2.240.24236
Azure App Service Tools v3.0.0
Azure Data Lake Tools for Visual Studio 2.6.5000.0
Microsoft Azure Data Lake Tools for Visual Studio
Azure Functions and Web Jobs Tools 17.2.240.24236
Azure Functions and Web Jobs Tools
Azure Stream Analytics Tools for Visual Studio 2.6.5000.0
Microsoft Azure Stream Analytics Tools for Visual Studio
C# Tools 4.2.0-2.22159.10+f3a5bad242b7a7b8149ae644de0a61c2f1bffc8d
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
CleanBinAndObjCommand Extension 1.2.58
CleanBinAndObjCommand Visual Studio Extension Detailed Info
CodeMaid 12.0
CodeMaid is an open source Visual Studio extension to cleanup and simplify our C#, C++, F#, VB, PHP, PowerShell, R, JSON, XAML, XML, ASP, HTML, CSS, LESS, SCSS, JavaScript and TypeScript coding.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Extensibility Message Bus 1.2.6 (master@34d6af2)
Provides common messaging-based MEF services for loosely coupled Visual Studio extension components communication and integration.
[Truncated]
Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android.
Xamarin Designer 172.16.31.10 (remotes/origin/main@3d19e6caf)
Visual Studio extension to enable Xamarin Designer tools in Visual Studio.
Xamarin Templates 17.1.13 (bb31b34)
Templates for building iOS, Android, and Windows apps with Xamarin and Xamarin.Forms.
Xamarin.Android SDK 12.2.99.130 (main/e89ae42)
Xamarin.Android Reference Assemblies and MSBuild support.
Mono: f34bd77
Java.Interop: xamarin/java.interop/main@aae23c97
ProGuard: Guardsquare/proguard/v7.0.1@912d149
SQLite: xamarin/sqlite/3.38.0@ccd83d8
Xamarin.Android Tools: xamarin/xamarin-android-tools/main@f0b3abd
Xamarin.iOS and Xamarin.Mac SDK 15.7.0.410 (478c1d2c8)
Xamarin.iOS and Xamarin.Mac Reference Assemblies and MSBuild support.
```
Answers:
username_1: I've had the same issue since updating to VS22 Preview 2.0, but when trying with a previous version everything runs fine. |
jsis/monthly-meetup | 96943494 | Title: Speaker: <NAME>
Question:
username_0: @username_1 is very passionate about [Mithril.js](https://lhorie.github.io/mithril/). Are you willing to talk at the next meetup?
Answers:
username_1: Definitely :), looking forward to it.
username_2: Awesome! Mithril was one of the libraries/frameworks I evaluated for a new project about a year ago. Ultimately I went for React but Mithril came in about 2nd or 3rd place of suitability for the project. I'm really intrigued by the simplicity and blazing speeds it boasts.
username_0: @username_1 Are you ready to give your talk at the meetup on the 17th of September?
username_1: Come hell or high water ;)
username_0: There are many events on the 17th of September so we are thinking about moving it by one week, to the 24th of September. Is that ok with you @username_1 ?
username_1: absolutely
Status: Issue closed
|
gtgalone/react-quilljs | 567988316 | Title: NPM package depends on tslib
Question:
username_0: v1.2.0 introduced a dependency on tslib in the generated NPM package. When trying to use react-quilljs the following error is thrown:
```
DependencyNotFoundError
Could not find dependency: 'tslib' relative to '/node_modules/react-quilljs/lib/index.js'
```
See [Code Sandbox](https://codesandbox.io/s/react-quilljs-3-ei75s) that reproduces the issue.
You can also see it in this NPM Package diff from [v1.1.3...1.2.5](https://diff.intrinsic.com/react-quilljs/1.1.3/1.2.5#file-esm__index.js)
Answers:
username_1: @username_0
Really, thank you for the issue!
I have removed tslib in v1.2.6.
Status: Issue closed
username_0: v1.2.6 fixes the code sandbox repo. Thanks for the quick fix! |
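A hedged note on the thread above: when a TypeScript library is compiled with `importHelpers` enabled, the emitted JavaScript imports helper functions from tslib, which must then be declared as a runtime dependency. Disabling it inlines the helpers and removes the dependency, which is a plausible shape of the v1.2.6 fix, though the actual change is unverified:
```jsonc
// tsconfig.json (library side); tsconfig files permit comments.
// With "importHelpers": true, the emitted JS imports helpers from "tslib",
// so tslib must be declared as a runtime dependency. Setting it to false
// inlines the helpers instead.
{
  "compilerOptions": {
    "importHelpers": false
  }
}
```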
aod6060/project_clean | 553204568 | Title: Feature Request: Wrap up option menu...
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Issue #20 ("class mode") needs to be solved.
**Describe the solution you'd like**
I want to wrap up the options menu because it will have a ton of little components.
Answers:
username_0: I'm going to wait on this for later...
Status: Issue closed
|
jazzfool/reclutch | 524079622 | Title: Add alternative graphics implementations
Question:
username_0: Currently an optional OpenGL-based Skia implementation of `GraphicsDisplay` is provided, but Skia presents a large build dependency.
In interest of accessibility (and maybe a 100% Rust project), some alternative implementations should be provided;
- `raqote` (+ `image` for advanced effects); its only woe is performance.
- `lyon` + `wgpu`.
- HTML `canvas`.
The strongest contender is `lyon` & `wgpu` as `wgpu` will soon expand into web, and would also mean hardware acceleration.
Answers:
username_0: Progress tracking at #14
username_0: Putting this in the backlog. Although Skia is a non-Rust dependency, the `skia-safe` developers have already implemented functionality to gracefully use pre-built binaries, making building it very painless.
Variety in rendering back-ends is preferable, and Skia already works very well, but I'll leave this issue open in case a Rust-based 2D graphics library shows up (perhaps Piet, but lack of backdrop filters and selective anti-aliasing is holding it back).
username_1: I don't know that this is really valuable to do.
I found it pretty annoying that Skia internals aren't `pub` in this library, so much so I tore it apart to make this:

https://github.com/username_1/imgui-skia-example
Thank you very much for the example code, but I'd ask you to just give us `pub` access to Skia.
username_0: So you want to use the `reclutch::skia` module to initialize a Skia surface? I could expose a `&skia::Canvas` closure as a specialized display command if that works? With that you could do something like this;
```rust
let mut display: SkiaGraphicsDisplay = /* ... */;
display.push_draw_closure(|canvas: &mut skia::Canvas| {
canvas.draw_rect(/* ... */);
// etc
}, ZOrder::default(), None, None);
```
username_1: Yes that works for me and it would make it so that I could use the real Reclutch :-)
I love your library don't get me wrong but I just had to do it this way because I need direct access to Skia for what I want to do here because my glyph editor uses lots of Skia APIs.
username_1: Your code that initializes the textures and the surfaces is really good, I was having a lot of trouble doing it myself so I decided why reinvent the wheel, your code is MIT so I can just borrow it. However I would like to actually use you as a dependency and not vendorized :-)
username_1: This has been fixed and Qglif is now using your awesome library and not my hack vendorized version 🙌
Status: Issue closed
username_0: Closing: Skia is holding up well. |
EdgeTranslate/EdgeTranslate | 658622546 | Title: Unable to customize the target language for select-to-translate lookups.
Question:
username_0: Source language: English. Select-to-translate shows the translation in English instead of Chinese, and I couldn't find where to change it to Chinese.

Firefox 78.0.2
Extension version 1.7.2
Answers:
username_1: Click the side translation icon at the top right of the browser, then click the downward arrow in the popup to find the language settings.
Status: Issue closed
|
i18next/i18next | 381942987 | Title: defaultNS option is ignored
Question:
username_0: I upgraded from v11.6.0 => v12.0.0 and noticed that the defaultNS property in options is not working.
My options look like this:
```
const options = {
  fallbackLng: 'en-US',
  lng: 'en-US',
  lowerCaseLng: true,
  load: 'currentOnly',
  ns: 'common',
  defaultNS: 'common',
  debug: false,
  react: { wait: true },
  keySeparator: '~',
  nsSeparator: false,
  interpolation: { escapeValue: false }, // not needed for react
};
```
I noticed in the documentation that the default value for defaultNS was changed from 'common' to 'translation' but this should not affect my current code since I explicitly set 'common' as my defaultNS and ns, and yet the t function still tries to load 'translation'. Any ideas how to fix this?
Answers:
username_1: defaultNS always was `translation`. Guess cause is react only: https://github.com/i18next/react-i18next/blob/master/src/NamespacesConsumer.js#L95
could you open an issue there: https://github.com/i18next/react-i18next
username_1: but even there it only would "emergency" fallback in cases there is no i18n instance found:
https://github.com/i18next/react-i18next/blob/master/src/NamespacesConsumer.js#L88 -> https://github.com/i18next/react-i18next/blob/master/src/NamespacesConsumer.js#L105
Could you please add more code to see what is wrong in your codebase?
username_0: yeah I'll open an issue there. I see that common.json is loaded but the t function tries to load from translation, even though I explicitly set the defaultNS to common. Only when I change the name of the file to translation.json does it work:
This is with the options that I wrote before:


So for some reason I am getting defaultNS undefined even though I set it to 'common' and you can see the t function uses the namespace 'translation'
username_1: best provide a sample on https://codesandbox.io/s/l4qrory2nl...seems really something is wrong with your configuration
username_0: fixed this by changing ns and defaultNS to "translation" and changing common.json -> translation.json
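For contrast, core i18next (without React) does honor a custom `defaultNS`; a minimal sketch (resource contents invented), which supports the suggestion that the fallback to `translation` came from the react-i18next layer:
```ts
import i18next from "i18next";

i18next
  .init({
    lng: "en-US",
    ns: ["common"],
    defaultNS: "common",
    resources: {
      "en-US": { common: { greeting: "Hello" } },
    },
  })
  .then((t) => {
    // Core i18next resolves unprefixed keys from defaultNS ("common"),
    // which points at the react-i18next layer, not i18next itself, as the
    // source of the fallback to "translation" reported above.
    console.log(t("greeting")); // "Hello"
  });
```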
username_1: If you like this module don’t forget to star this repo. Make a tweet, share the word or have a look at our https://locize.com to support the devs of this project -> there are many ways to help this project :pray:
Status: Issue closed
|
coldclimate/beerclub | 129168491 | Title: More beers needed
Question:
username_0: Beer Advocate (http://www.beeradvocate.com/) can be scraped easily because its beers have unique numeric ids.
Ideally we could store that information on an ElasticSearch node, and dump the data into JSON.
Answers:
username_0: We could even use this scraper https://github.com/sbuss/beeradvocate-scraper |
gaearon/react-hot-loader | 177737079 | Title: "If you know a good guide on migrating to Webpack, let me know and Iʼll link to it"...
Question:
username_0: Found this:
Migrating to webpack from require.js:
https://gist.github.com/xjamundx/b1c800e9282e16a6a18e
https://www.youtube.com/watch?v=NEJyIBwo-ik
https://github.com/kentcdodds/require-to-webpack-todomvc
Hot reload for backbone/Marionette apps:
http://avitureblog.com/2016/09/01/webpack-hot-reloading.html
FYI
Answers:
username_1: Thanks! The website's a bit out of date now (update soon!), and there are tons of Webpack articles at this point. We'll probably just point to the Webpack docs in the future.
Status: Issue closed
|
hotosm/osma-health | 317754243 | Title: Possible enhancement: 3d tiles
Question:
username_0: To visually compare population density with relative completeness, it could be interesting to think about a 3d map, where the height of the tile is the population/building density. This would allow a user to quickly pan over to the largely populated areas that are under-mapped.
Answers:
username_0: 
Here's how it could possibly look, with minimal styling (just population numbers on a linear ramp). By looking at the bars, we can notice those that are tall + red to target first.
username_1: I like this idea @username_0 - perhaps as an option to switch back and forth. This certainly gives a lot more context.
@username_2 @smit1678 - this is a possible enhancement, we won't have time to build this in 100% but if you all like this we'll give it a shot after wrap up.
username_2: I really like this as well and would love to move this forward. Will need @smit1678 to weigh in as well. |
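A hedged sketch of how such an extruded view could be wired up with Mapbox GL (the tileset URL, source-layer, and the `population`/`completeness` properties are all assumptions for illustration):
```ts
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "<your-token>"; // placeholder

const map = new mapboxgl.Map({
  container: "map",
  style: "mapbox://styles/mapbox/light-v10",
});

map.on("load", () => {
  // Hypothetical tileset carrying per-area population and completeness.
  map.addSource("osm-completeness", {
    type: "vector",
    url: "mapbox://example.osm-completeness", // assumed tileset id
  });
  map.addLayer({
    id: "completeness-3d",
    type: "fill-extrusion",
    source: "osm-completeness",
    "source-layer": "areas", // assumed layer name
    paint: {
      // Height encodes population on a linear ramp.
      "fill-extrusion-height": ["*", ["get", "population"], 0.05],
      // Color encodes relative completeness (red = under-mapped).
      "fill-extrusion-color": [
        "interpolate", ["linear"], ["get", "completeness"],
        0, "#d73027",
        1, "#1a9850",
      ],
    },
  });
});
```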
start-jsk/rtmros_tutorials | 1070577958 | Title: Is HIRONXJSK's end_coords orientation off by 15 degrees between the inside and outside of hrpsys?
Question:
username_0: I noticed this while writing https://github.com/start-jsk/rtmros_common/pull/1114.
HIRONXJSK's end_coords is defined, in the URDF and eus models, as rotated 90 degrees from the parent link (in the parent link frame) at the initial pose:
https://github.com/start-jsk/rtmros_tutorials/blob/2edfaf5ef6c35ffdd5bbfb6799dd5c2090f31c8c/hrpsys_ros_bridge_tutorials/models/hironxjsk.yaml#L31
https://github.com/start-jsk/rtmros_tutorials/blob/2edfaf5ef6c35ffdd5bbfb6799dd5c2090f31c8c/hrpsys_ros_bridge_tutorials/CMakeLists.txt#L291
However, due to the OpenHRP3 specification described in https://github.com/start-jsk/rtmros_common/pull/925 and https://github.com/start-jsk/rtmros_common/pull/1114, inside hrpsys it is treated at the initial pose as rotated 90 degrees from the world frame (in the world frame).
As a result, on HIRONXJSK, whose shoulder joint axes are tilted 15 degrees, the orientation of end_coords may differ by 15 degrees between the inside and outside of hrpsys.
Answers:
username_0: To fix this, at every place like
https://github.com/fkanehiro/hrpsys-base/blob/bdbc4c8809b8d044e9bcd6583bda6b5bffb7c8b7/rtc/ImpedanceController/ImpedanceController.cpp#L188
it seems necessary to left-multiply by the parent link's `Rs`.
username_1: @username_0 Sorry, I don't understand this at all, so let me ask: when the orientation of end_coords differs between the inside and outside of hrpsys, what concretely goes wrong?
Inside hrpsys everything is presumably self-consistent, so things like impedance control still work, but when you bring forces measured by hrpsys into the ROS world their orientation becomes wrong; is it something like that?
username_0: Since everything is self-consistent inside hrpsys, for HIRONX use cases I think the only concrete problem is:
- when you want to set the ImpedanceController parameters `force_gain` and `moment_gain` differently per X, Y, Z axis, the axes being changed are tilted 15 degrees from the ROS end_coords axes.
Forces measured by hrpsys are brought into ROS in the sensor frame, so they are not affected by the end_coords offset.
Status: Issue closed
username_1: I see, thanks.
The interface for changing `*_gain` from ROS is, if I remember correctly, auto-generated from the IDL, so it's awkwardly hard to patch.
Patching the Python or EusLisp interfaces above it seems feasible, but I haven't actually changed `*_gain` on Hiro for about the last three years...
username_1: (I closed this for a moment by mistake, sorry)
username_2: I may not have understood this properly, but is it correct that if we merge
https://github.com/start-jsk/rtmros_tutorials/issues/603#issuecomment-985536653
then setting `_gain` in hironx's impedance control will differ by 15 degrees from the current behavior (but be aligned with world coords)?
And if there is no place in @username_1's doctoral thesis where gains are set, would it be fine to merge https://github.com/start-jsk/rtmros_common/pull/1114?
username_1: @username_2
If my understanding is correct, merging https://github.com/start-jsk/rtmros_common/pull/1114 has no effect at all on HIRONX; regardless of https://github.com/start-jsk/rtmros_common/pull/1114, I think @username_0 was simply telling us that the `*_gain` axes are off by 15 degrees from what we expect.
The reason is that the code by which HIRONX (and in fact every robot using hrpsys_ros_bridge) passes force sensor values to ROS already contains a change equivalent to https://github.com/start-jsk/rtmros_common/pull/1114, merged as https://github.com/start-jsk/rtmros_common/pull/925.
https://github.com/start-jsk/rtmros_common/pull/1114 is a change needed when connecting robots that handle vision sensors in hrpsys to ROS, which is rare on recent real robots; I think it is mainly a fix for connecting hrpsys-simulator and choreonoid to ROS.
As far as I know, HIRONX has never been used for vision processing with hrpsys-simulator connected to ROS, and choreonoid is not used at all, so there should be no impact.
So as for https://github.com/start-jsk/rtmros_common/pull/1114, if someone familiar with it agrees (I'm not sure who that would be... perhaps @username_2?), it should be fine to merge.
And as a completely separate matter, I haven't set `*_gain` so far, but I may do so in the future.
When it becomes necessary, I'll either set it keeping in mind that the axes are off by 15 degrees, or think about patching it somehow.
Is my understanding above correct? -> @username_0
username_0: The fundamental cure for the end_coords frame offset is, as in https://github.com/start-jsk/rtmros_tutorials/issues/603#issuecomment-985536653, to left-multiply end_coords' localp and localR by Rs inside hrpsys; then the end_coords axes no longer end up 15 degrees off. |
phase-0/phase-0 | 310364800 | Title: 5.4 Event-driven DOM modification
Question:
username_0: # 5.4 Event-driven DOM modification
So far in our game-making quest we've seen how to _bind event listeners_ to DOM elements, and how to change element appearance using classes. Now we're going to get you to combine some of those techniques with a bigger game board!
- [ ] Start Toggl.
- [ ] Read the [_Events and classes resource_](https://github.com/dev-academy-programme/curriculum/blob/master/resources/js-events-and-css-classes-ARTICLE/README.md). If you have questions during this assignment, hopefully they'll be answered in the resource! (If not, please let us know in the comments below so we can improve it.)
- [ ] Visit the following repository on GitHub: https://github.com/dev-academy-challenges/dom-events-and-classes. Click the **Fork** button.
- [ ] Clone ***your*** copy of the repo. The terminal command will look like `git clone https://github.com/your-username/dom-events-and-classes`. Replace `your-username` with your GitHub username.
- [ ] Load the `index.html` file contained in that repository in your browser. You'll see three rows of three divs: your "game board". They should all be grey. We've added a rollover effect using CSS, but otherwise they don't do anything - yet!
- [ ] You'll be using similar techniques to the previous assignments, but you'll have to find the elements using **class** instead of **id**. If you get stuck, look at some of the examples in the resource linked above.
- [ ] Take a look at the `style.css` file. For this assignment you don't actually have to make any changes to it, but notice that there are CSS classes for `div.blue`, `div.green`, and `div.invisible`.
- [ ] Write a function `makeBlue` that takes an `evt` parameter, just as you did last time. `makeGreen` is provided if you need an example.
- [ ] In your function, add the `blue` class to the event's target element using `classList.toggle`.
- [ ] At the end of the function, call `updateCounts()`.
- [ ] Note: `makeGreen` uses `preventDefault()` to stop the right mouse button's context menu from appearing. You don't need it for the other event handlers.
- [ ] When you've written `makeBlue`, add an event listener for it in the `bindEventListeners` function beneath the one for `makeGreen`.
- [ ] It should look very similar to the one above it, but use the 'click' event and your `makeBlue` function.
- [ ] Reload the page in your browser and test the changes. If all goes well, when you click the left mouse button on a dot it should go blue, and go green if you click the right mouse button!
- [ ] `git commit`!
- [ ] Write a function `hide` that takes an `evt` parameter.
- [ ] It'll look almost exactly the same as the first two event handlers, but add the class `invisible`.
- [ ] Don't forget to call `updateCounts()` at the end of the function.
- [ ] Add an event listener for your `hide` function to `bindEventListeners`. Use the 'dblclick' event.
- [ ] Reload in the browser and test. A double-click should make dots disappear, and they should reappear with a second double-click in the same place.
- [ ] Commit your changes.
- [ ] Now, you might have noticed that the totals are not updating when you click dots. Time to change that!
- [ ] Take a look at the `updateCounts` function. It contains an object with three properties to track the number of dots that are green, blue, or invisible.
- [ ] You're going to need to get all of the dots in one array. If you're not sure how to do that, look in the `start` function:
- [ ] It gets the `board` element using its class, selecting the first element of the resulting array.
- [ ] It takes the `.children` property which is an array containing all the elements _inside_ the board.
- [ ] Once you have an array with all the dots in it, you're going to need to loop through them using a `for` loop.
- [ ] Check each element in the array using `classList.contains` to see if it has the 'blue' class. If it does, increase the `totals.blue` property by one.
- [ ] Repeat that check for the 'green' and 'invisible' classes. (A sketch of the finished functions appears after this checklist.)
- [ ] Reload the browser periodically to see if your changes are working. If they are, the counts to the right of the board should start going up!
- [ ] Push your commits back to GitHub with `git push`, and post a link to your copy of `game.js` in the Waffle comments below.
- [ ] Think that wasn't much of a game? Next sprint, we take all these skills and build something for real!
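Here is a minimal sketch of what the finished `game.js` pieces described above might look like. The class names (`board`, `blue`, `green`, `invisible`) and the exact structure are assumptions based on this checklist; the repo's actual starter code may differ slightly:

```js
function makeBlue(evt) {
  // Toggle the 'blue' class on the clicked dot.
  evt.target.classList.toggle('blue');
  updateCounts();
}

function hide(evt) {
  // Toggle visibility on double-click.
  evt.target.classList.toggle('invisible');
  updateCounts();
}

function updateCounts() {
  var totals = { green: 0, blue: 0, invisible: 0 };
  // Same lookup the start function uses: the board's children are the dots.
  var dots = document.getElementsByClassName('board')[0].children;
  for (var i = 0; i < dots.length; i++) {
    if (dots[i].classList.contains('blue')) { totals.blue++; }
    if (dots[i].classList.contains('green')) { totals.green++; }
    if (dots[i].classList.contains('invisible')) { totals.invisible++; }
  }
  // Writing the totals back into the on-page counters is left to the
  // repository's existing code.
}
```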
Answers:
username_1: dom-events-and-classes/game.js
Status: Issue closed
|
GMOD/jbrowse-components | 824991145 | Title: Errors on serverSideRenderedBlock not handled properly
Question:
username_0: If a block rendering error occurs
```
diff --git a/plugins/linear-genome-view/src/BaseLinearDisplay/models/serverSideRenderedBlock.ts b/plugins/linear-genome-view/src/BaseLinearDisplay/models/serverSideRenderedBlock.ts
index 79042b986..8a92c2bdf 100644
--- a/plugins/linear-genome-view/src/BaseLinearDisplay/models/serverSideRenderedBlock.ts
+++ b/plugins/linear-genome-view/src/BaseLinearDisplay/models/serverSideRenderedBlock.ts
@@ -202,6 +202,8 @@ function renderBlockData(self: Instance<BlockStateModel>) {
)
}
+ throw new Error('this is a failure that happens on any block render')
+
```
Then the error is not handled properly: it goes into renderBlockEffect with `props` undefined and throws when destructuring an undefined object.
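For illustration, a minimal sketch of the kind of defensive check that would avoid the destructuring crash; the prop shape and names here are assumptions, not the actual plugin types:

```ts
// Hypothetical guard at the top of renderBlockEffect.
function renderBlockEffect(props?: { html: string; features: unknown[] }) {
  if (!props) {
    // The abortable-reaction error handler is expected to have recorded
    // the block's error state already, so there is nothing left to render.
    return
  }
  const { html, features } = props
  // ...continue rendering as before
}
```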
Answers:
username_0: Note that this may go back to this commit b7e1085e3
This may be a misunderstanding of how makeAbortableReaction error handling works; it is unexpected that it goes into the renderBlockEffect code
username_0: This was quick-fixed, I suggest leaving this open to get a proper integration test for the error case...retitled to reflect
username_0: Fixed 875d0bd
Status: Issue closed
|
airshipit/treasuremap | 925182195 | Title: Sub-clusters Deployment: Fix Server/Rack Label Configuration for Multi-Tenant
Question:
username_0: The BMH labels used by ViNO and SIP to identify the server and rack where a VM will be located are inconsistent. Need to make the label setting consistent across the multi-tenant site deployment.
**Expected behavior**
The chain of events for providing server/rack labels to VMs created by ViNO and consumed by SIP is:
1) The hostgenerator for the target cluster will need to specify label values for the BMHs (https://review.opendev.org/c/airship/airshipctl/+/797130, https://review.opendev.org/c/airship/treasuremap/+/797132).
2) The synclabeller operator is configured to sync the BMH labels to the K8s nodes.
3) ViNO copies these labels to the BMHs for VMs it creates on the corresponding K8s nodes. This is configured in the ViNO CR.
4) SIP scheduling matches to these labels per SIPCluster CR.
Need to align on using the following BMH labels to deliver the multi-tenant type sub-cluster (an illustrative manifest follows the list):
- `airshipit.org/rack`
- `airshipit.org/server`
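For illustration, a hypothetical BareMetalHost manifest carrying the agreed label keys could look like this (all names and values are made up):

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node01
  labels:
    airshipit.org/rack: r01
    airshipit.org/server: node01
```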
**Environment**
- Treasuremap version: https://github.com/airshipit/treasuremap/commit/12a6940214d4047362d11f8ed1305ce9875a5d57
- Treasuremp site type: multi-tenant
Answers:
username_1: I think I already put in the needed changes for this last week, but they haven't merged:
https://review.opendev.org/c/airship/treasuremap/+/797132
https://review.opendev.org/c/airship/airshipctl/+/797130
Status: Issue closed
username_0: Closing per merged patchsets |
thinkjs/thinkjs | 263350125 | Title: Some questions and suggestions about Logic-layer validation
Question:
username_0: ## As per the title
### ENV
OS Platform:
Node.js Version:8.4.0
ThinkJS Version:3.0.0
### code
```js
this.allowMethods = 'post';
// define the rules
let rules = {
userName: {
aliasName: '用户名',
    string: true, // the field type is String
    required: true // the field is required
}
};
```
### error message
```
// your error message here
```
### more description
- The `allowMethods` above has no effect; you have to use `this.rules`.
- If `this.rules` is used, then when the request is not allowed the response is `"msg": "METHOD_NOT_ALLOWED"`, and currently this doesn't seem customizable.
- The errors for the multiple validation rules inside `rules` are all the same: they uniformly return the `validateDefaultErrno` from `config`. How can they be distinguished in more detail?
Answers:
username_0: My current approach is to override it in the corresponding `fail` handler based on the returned `msg`.
username_1: Generally speaking, all validation errors should be of the same type, which makes them easier for the frontend to handle. It's also unclear by which dimension you would distinguish error numbers: if each validation rule gets its own error number, how would you handle multiple failing rules at once? If you really need to distinguish them, you can call `this.validate(rules)` yourself, inspect the result via `this.validateErrors`, and then call `this.fails(validateErrno, errMsg, this.validateErrors)` to return the response.
username_2: ping @username_0
username_2: Closing the issue due to no response.
Status: Issue closed
|
JazzCore/python-pdfkit | 126997466 | Title: Pass custom request headers for URL in pdfkit.
Question:
username_0: I am trying to convert a URL to a PDF using pdfkit in Python as follows:

```python
import pdfkit
pdfkit.from_url(url, file_path)
```

I wanted to know if there is some way to pass custom request headers with this URL, such as `X-Proxy-REMOTE-USER`.
Regards
Rohit
Answers:
username_1: maybe this can help [#53](https://github.com/username_2/python-pdfkit/issues/53)
Status: Issue closed
username_2: Now there is; pass it in `options` like this:

```python
options = {
    'custom-header': [
        ('Accept-Encoding', 'gzip')
    ]
}
pdfkit.from_url(url, file_path, options=options)
```
HumanBrainProject/hbp-validation-framework | 332290011 | Title: direct access to result detail in validation framework is broken
Question:
username_0: When accessing directly to a test result using the url, the user is stuck on the home page.
ex: https://collab.humanbrainproject.eu/#/collab/5165/nav/42859?state=result.38517108-87cf-44d8-813b-8902dba2ab2f |
clearlinux/distribution | 320126124 | Title: Wake from suspend failing with 4.16.6-563.native
Question:
username_0: My Dell XPS is no longer able to successfully resume. I see the power button light up, but the screen stays off.
From "journalctl -b -1" I see the following new errors:
```
May 03 17:46:17 rustic kernel: [drm:intel_power_domains_verify_state] *ERROR* power well DDI A/E IO power well state mismatch (refcount 1/enabled 0)
May 03 17:46:17 rustic kernel: [drm:intel_power_domains_verify_state] *ERROR* power well DDI B IO power well state mismatch (refcount 1/enabled 0)
May 03 17:46:17 rustic kernel: [drm:intel_power_domains_verify_state] *ERROR* power well DDI C IO power well state mismatch (refcount 1/enabled 0)
May 03 17:46:17 rustic kernel: [drm:intel_power_domains_verify_state] *ERROR* power well DDI D IO power well state mismatch (refcount 1/enabled 0)
May 03 17:46:18 rustic kernel: [drm:intel_dp_start_link_train] *ERROR* [CONNECTOR:59:eDP-1] Link Training failed at link rate = 540000, lane count = 4
May 03 17:46:28 rustic kernel: [drm:drm_atomic_helper_wait_for_flip_done] *ERROR* [CRTC:37:pipe A] flip_done timed out
May 03 17:46:38 rustic kernel: [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [CRTC:37:pipe A] flip_done timed out
May 03 17:46:48 rustic kernel: [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [CONNECTOR:59:eDP-1] flip_done timed out
May 03 17:46:59 rustic kernel: [drm:drm_atomic_helper_wait_for_flip_done] *ERROR* [CRTC:37:pipe A] flip_done timed out
May 03 17:47:09 rustic kernel: [drm:drm_atomic_helper_wait_for_dependencies] *ERROR* [CRTC:37:pipe A] flip_done timed out
```
Answers:
username_1: Does it attempt to resume and crash or just sit there stuck in suspend until you hard press the power button for 10 sec? (because that's what happens on my t470 until I run linux-stable vanilla kernels. FWIW 4.16.7 resumes pretty well for me.)
username_0: It looks like it turns off (all lights are off and screen is completely black) and then I press the power button to see the backlight for the button come on as well as the backlight behind the keyboard. The screen lights up partially (only noticeable in a dark room), and then nothing.
Looking through the logs, it appears the system never really shut down.
username_2: @miguelinux @bwarden
username_0: I had some spare cycles to debug this further. The culprit is the try_load_dmc.patch. I can rebuild the 4.16.7-567 linux package and if I remove this one patch then the issue goes away.
This patch was introduced after 4.16.5, which was the last working kernel I had before the upgrade that introduced the bug.
```
commit 367cfd96b0a277e062d5c841091c9a14cb352cb7
Author: <NAME> <<EMAIL>>
Date: Sat Apr 28 16:48:24 2018 +0000

    allow for late loading of the DMC

 config             | 2 +-
 linux.spec         | 6 ++++--
 try_load_dmc.patch | 12 ++++++++++++
 3 files changed, 17 insertions(+), 3 deletions(-)
```
username_0: Another interesting fact... the offending patch just attempts to load the DMC firmware in the i915_dmc_info debugfs call, so I tried moving the firmware away and then booting the offending kernel again. I can see in dmesg both the original firmware load failure and the additional attempt when the i915_dmc_info file is read, but interestingly the original failure to suspend still happens.
So... just attempting to run intel_csr_ucode_init again is all it takes to trigger this bug.
Status: Issue closed
username_0: Release 22860 stopped applying the try_load_dmc.patch and as a result i915 based systems can successfully suspend |
quintel/etmodel | 131301148 | Title: New dataset information page does not work as expected
Question:
username_0: <img width="363" alt="screen shot 2016-02-04 at 11 13 42" src="https://cloud.githubusercontent.com/assets/4669028/12812012/62a2af68-cb30-11e5-892c-5e196f3f87ac.png">
- If I hit the `Start a new scenario from this country` button, nothing happens
- If I hit the `Go back to the model` link, it opens a (new?) scenario
- (minor) I think the first `Go back to the model` link is redundant
Answers:
username_1: I got that link for free using the 'pages' layout. Guess I could use a 'blank' layout considering the empty-state of the page - as of now.
Status: Issue closed
|
jtesta/ssh-audit | 998710818 | Title: Missing Debian guides, not clear enough which Ubuntu version to use for which Debian version
Question:
username_0: Hi,
Been upgrading my infra from CentOS 7 to Debian 10 and 11, and I've been using ssh-audit to make our SSH more secure, but there are no dedicated guides for Debian. I know that Ubuntu and Debian are almost interchangeable,
but Ubuntu and Debian don't usually ship the same OpenSSH version:
Debian 10: 7.9 (or 8.4 in backports)
Debian 11: 8.4
Ubuntu 20.04: 8.2
Ubuntu 18.04: 7.6
I suppose the guides for major versions are almost the same, but it would be great to get Debian-specific guides, or at least a Debian version mapping in the Ubuntu guides.
Cheers
Alex
Answers:
username_1: Ask and ye shall receive! Guides have been added for Debian 10 & 11: https://www.ssh-audit.com/hardening_guides.html
Please let me know if you run into any trouble with them. Thanks!
Status: Issue closed
username_0: Hi, thanks! I just tried both guides on both instances and found this.
Debian 11:

```
# algorithm recommendations (for OpenSSH 8.4)
(rec) +diffie-hellman-group14-sha256 -- kex algorithm to append
```

Debian 10 (with OpenSSH 8.4 from backports):

```
# algorithm recommendations (for OpenSSH 8.4)
(rec) +diffie-hellman-group14-sha256 -- kex algorithm to append
```

Can we add that kex algorithm to the guide so we don't end up with a leftover recommendation after following all the steps? (We are supposed to have everything fixed and in the best config after the guide, after all.)
Would recommending OpenSSH 8.4 from backports for Debian 10 be too much?
Alex
username_1: The issue of diffie-hellman-group14-sha256 coming up as a recommendation is directly related to issue #117. Hopefully that'll be fixed soon.
I'd recommend that you keep that key exchange algorithm disabled, since it only offers 2048-bit/112-bit security strength.
--
<NAME>
Founder & Principal Security Consultant
Positron Security |
LibraryOfCongress/concordia | 993293505 | Title: Investigate sporadic 502 errors loading frontend pages
Question:
username_0: **What behavior did you observe? Please describe the bug**
Volunteers and the community managers have observed a recent rise in 502-Bad Gateway errors when loading site pages. This has been seen on transcription pages as well as campaign landing pages and the About page. |
sanhee/codesquad-cocoa-java | 750658404 | Title: Week 3 mission review
Question:
username_0: Hello Noeul, this is Sony.
Here is some brief feedback on your week-3 mission; please take a look :)
1. Putting all the logic in a single class is bad for readability and code reuse. Try splitting classes by responsibility and importing the classes you need.
2. Delete unused code instead of leaving it commented out.
3. All of your methods are static; look into the pros and cons of implementing them that way.
4. Method names should be verbs written in camelCase, and some of your methods don't follow this rule. Also, it's best to avoid abbreviations in names as much as possible, since abbreviations are often only understandable to their author (e.g. chHourToKor).
5. Study Enum and think about whether there are parts of your implementation where it could be applied. 🧐
Answers:
username_1: Hello Sony, thank you for the feedback.
I have one question: when implementing the Hangul clock, is there any significant difference or problem in using an ArrayList instead of a 2D array?
Status: Issue closed
username_1: 안녕하세요 노을, 소니입니다.
3주차 미션에 대해 간단한 피드백 남기니 참고해주세요 :)
1. 하나의 클래스에 모든 로직을 다 작성하는 건 가독성과 코드의 재사용성면에서 좋지 않습니다. 역할별로 클래스르 구분하고 필요한 클래스를 import 해서 사용하는 방식으로 변경해보세요
2. 사용하지 않는 코드는 주석으로 해놓지 말고 삭제해주세요.
3. 모든 메서드가 static 메서드인데 이렇게 구현할 경우 어떤 장단점이 있는지 공부해보세요.
4. 메서드명은 동사로 작성하고 카멜케이스로 작성합니다. 이 규칙이 지켜지지 않은 메서드가 보이네요. 또한 모든 네이밍은 약어를 최대한 사용하지 않고 작성하는게 좋습니다. 약어는 본인만 알아볼 수 있는 경우가 많아 다른 사람이 이해하기 어려울 수 있습니다. (ex. chHourToKor)
5. Enum에 대해 공부해보시고 구현 로직에 적용할만한 부분이 있는지 생각해보세요. 🧐
Status: Issue closed
|
uber/petastorm | 433254845 | Title: Performance benchmarks against HDF5
Question:
username_0: Hi all, great work and glad to see the progress here. Is there a comparison anywhere between petastorm and HDF5/bcolz/zarr?
Answers:
username_1: Hi. These formats look interesting. We never evaluated their performance, but they do look promising and it would be interesting to try. We are playing internally with making Parquet just one of several interchangeable backend implementations for Petastorm; we could consider making bcolz or zarr another backend alternative.
username_0: I think for use cases from machine learning, algorithmic work, etc. that would make a lot of sense, as I currently see Parquet used mostly for relational data... maybe I am wrong in this assumption.
username_2: And I would like to see the comparison with lmdb. |
gkovacs/pdfocr | 85497802 | Title: Support for black and white, and grayscale pdf files
Question:
username_0: Currently, pdfocr converts b/w and grayscale PDFs to color PPM format and runs tesseract on them. As a result, the output file of pdfocr is about 10 to 100 times bigger than the input file in the case of b/w PDFs.
But there is a way to reduce the file size.
Line 331 in the newest version of pdfocr says:

```ruby
sh "pdftoppm -r 300 #{shell_escape(basefn)}.pdf >#{shell_escape(basefn)}.ppm"
```

If this line is replaced with the following for b/w input:

```ruby
sh "gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pbmraw -r300 -sOutputFile=#{shell_escape(basefn)}.ppm #{shell_escape(basefn)}.pdf"
```

or with the following for grayscale input:

```ruby
sh "gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pgmraw -r300 -sOutputFile=#{shell_escape(basefn)}.ppm #{shell_escape(basefn)}.pdf"
```

then the PPM files stay b/w or grayscale, and the output file of pdfocr becomes much smaller than it currently is.
The problem with this is that it always converts the pages to b/w or grayscale.
So it would be nice if you implemented an additional option such as -gray or -mono in pdfocr, and selected the PPM conversion command according to the option.
*ps: pdftoppm also supports -mono and -gray options, but for some reason the -mono option of pdftoppm reduces image quality, so I avoided using it and used the gs command instead.*
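A hedged sketch of how the suggested option could be wired into pdfocr's rasterization step; the `opts[:color]` key is a made-up name, and only the `sh`/`shell_escape` helpers are taken from the existing code:

```ruby
# Pick the Ghostscript device from a hypothetical color-depth option.
device = case opts[:color]
         when :mono then 'pbmraw'
         when :gray then 'pgmraw'
         end

if device
  sh "gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=#{device} -r300 " \
     "-sOutputFile=#{shell_escape(basefn)}.ppm #{shell_escape(basefn)}.pdf"
else
  # Default: keep the current color pdftoppm conversion.
  sh "pdftoppm -r 300 #{shell_escape(basefn)}.pdf >#{shell_escape(basefn)}.ppm"
end
```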
Answers:
username_1: Why not extract the raw image as it's encoded in the PDF? The same file format at the same color depth at the same resolution would be ideal. `identify <file.pdf> -verbose` shows the PDF page image type, resolution and geometry information.
As of ver 0.50, pdftoppm produces broken PDFs when mono is used. Maybe it should be abandoned?
jprichardson/node-fs-extra | 319744964 | Title: Problems when dealing with invalidly-encoded filenames
Question:
username_0:
- **Operating System:** Debian 9
- **Node.js version:** 8.9.3
- **`fs-extra` version:** 5.0.0
Hi there. I ran into some cases where `remove()` was unable to remove a directory due to filename encoding issues. I believe there are similar issues using `empty`, `copy`, and `move` operations (and their sync counterparts - basically anything that relies on `fs.readdir` / `fs.readdirSync`).
My issue arose when trying to `fs.remove()` some directories that were created from an unzip operation. During `remove`s / `rimraf`'s tree walk, some of the returned directories seemed not to exist (although they did), causing the final `unlink` operation to fail (since it wasn't actually successfully emptied).
It seems that, in general, names on a file system are just byte sequences, which are not guaranteed to represent fully valid strings. This causes the bytes-> string -> bytes operation, that happens when listing and then operating on items in a directory using Node, to not always produce the same file name that it read.
This encoding problem has been a known Node issue for a while, which is why an option was added to return `Buffer`s from `fs.readdir`. My suggestion is to update the affected methods to use this `Buffer` option. I'm happy to work on a PR, but I wanted to at least get some feedback and discuss the issue before diving in.
Here are a couple Node issues relating to the file name encoding problem:
https://github.com/nodejs/node-v0.x-archive/issues/2387
https://github.com/nodejs/node/pull/5616
Thanks!
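For context, a minimal sketch of what the Buffer-based directory walk would look like; the directory path is a placeholder:

```js
const fs = require('fs');

const dir = '/tmp/some-dir'; // hypothetical directory with oddly-encoded names

for (const name of fs.readdirSync(dir, { encoding: 'buffer' })) {
  // Join the path at the byte level; a string round trip could mangle the name.
  const fullPath = Buffer.concat([Buffer.from(dir + '/'), name]);
  fs.unlinkSync(fullPath); // fs APIs accept Buffer paths directly
}
```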
Answers:
username_1: My first impulse is PR welcome. I'm assuming this wouldn't affect the external API, but I'm wondering how we'd handle the `filter` for non-UTF8 filenames. Thoughts?
username_0: Yes, I was thinking it wouldn't affect the API, but I didn't know about the `filter` option, so thank you for the heads up.
I think it's possible for `copy` to read names with the `Buffer` option, and to convert these to strings before passing them to the `filter` callback, to keep the API the same, potentially with the new Buffer-variety forms of the names as additional arguments. This, of course, depends on us being able to reliably decode the Buffers into string names the same way that `fs` is doing it currently.
From [this](https://github.com/nodejs/node/blob/ce4c8c823ca03d065fb880c50789ba136722c8e7/src/node_file.cc#L1218) it looks like Node simply uses utf-8 encoding by default. This seems strange since I think Windows stores file names in UTF-16 / UCS-2 encoding, but I just checked on Windows and the Buffers are indeed utf-8 encoded.
username_2: @username_0 @username_1, bringing this issue back up, because we face the same problem with `fs.cp()` in Node.js.
I've been working on a port of Node.js' path methods that work on Buffers:
https://github.com/username_2/path-buffer
I've make an effort to detect `utf8` vs., `utf16`, so that the appropriate separator is added or removed by methods like `join` and `dirname`, but I'm not an expert at string encodings, so it would be good to have someone who's bumped into the issue confirm the logic is sound. |
schramm-famm/sandbox | 504184854 | Title: Update the UI to show the current cursor position of other active clients
Question:
username_0: The server would need to track the cursor position of each client somehow, so it would need to be able to identify which client is which.
Might depend on #5 or some other implementation of identifying clients (cookies? accounts?). |
8080labs/ppscore | 755236200 | Title: Data preprocessing and information leakage
Question:
username_0: Hello, first of all, thanks for the package; it is very useful, and the overall approach is innovative and saves a lot of effort. I have a comment regarding the "state" of the data the pps analysis runs on: it seems (I may be mistaken) that any transformation applied to the data beforehand (standardization, for example) will leak information into the k-fold cross-validations. Is that correct? The module could use sklearn's pipelining and standard transforms to possibly increase the information generated; would this be of value to the module?
Answers:
username_1: Hi Hector, thank you for reaching out and for sharing your suggestions.
I agree that transformations to the data can lead to data leakage.
What is your proposal for adding sklearn pipelining and standard transforms to ppscore?
username_0: Let me try over the week replacing the models, regressor/classifier, with a pipeline model including one standardization step. If this works it can be made a kwarg in predictors.
username_1: I would like to protect your time, so before you start implementing the proposal, please provide a concept (aka some examples) for the API first. This way, we can first discuss the new API (aka user experience) and when we agree on a suitable API, we can talk about the implementation.
username_0: Yes, Florian: minimal changes if it works. It could be a keyword argument added to the predictors function, taking a list of transformations, which reaches the VALID_CALCULATIONS dictionaries and replaces tree.DecisionTree*() with a pipeline that preprocesses using the input list of transformations. Along the lines of this:
https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html#sklearn.pipeline.make_pipeline
username_1: I think I got it - can you still please give 1 detailed example with the actual syntax. I would love to have a look at how the total code would look like
username_0: As an example only, in calculation:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

# Change the models to:
VALID_CALCULATIONS = {
    "regression": {
        "type": "regression",
        "is_valid_score": True,
        "model_score": TO_BE_CALCULATED,
        "baseline_score": TO_BE_CALCULATED,
        "ppscore": TO_BE_CALCULATED,
        "metric_name": "mean absolute error",
        "metric_key": "neg_mean_absolute_error",
        "model": Pipeline([('scaler', StandardScaler()), ('tree', tree.DecisionTreeRegressor())]),
        "score_normalizer": _mae_normalizer,
    },
    "classification": {
        "type": "classification",
        "is_valid_score": True,
        "model_score": TO_BE_CALCULATED,
        "baseline_score": TO_BE_CALCULATED,
        "ppscore": TO_BE_CALCULATED,
        "metric_name": "weighted F1",
        "metric_key": "f1_weighted",
        "model": Pipeline([('scaler', StandardScaler()), ('tree', tree.DecisionTreeClassifier())]),
        "score_normalizer": _f1_normalizer,
    },
}
```
This produces slightly different results for some data sets. The idea is to let the `predictors` function swap the model keys for a constructed pipeline, whose constructor is a little awkward, as it takes (name, transformer) tuples. The pipeline should take care of producing non-leaking cross-validation scores.
A call to predictors would look like:

```python
transformers = [StandardScaler(), MinMaxScaler()]
predictors(df, 'column', transformers=transformers)
```

Here predictors (or another function) would have to build the pipeline list.
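For what it's worth, a small sketch of how `predictors` could build that pipeline internally; `_build_model` is a hypothetical helper name, not part of ppscore:

```python
from sklearn.pipeline import make_pipeline

def _build_model(base_estimator, transformers=None):
    """Wrap the configured estimator in a preprocessing pipeline.

    make_pipeline auto-names each step, so the caller can pass plain
    transformer instances instead of (name, transformer) tuples.
    """
    if not transformers:
        return base_estimator
    return make_pipeline(*transformers, base_estimator)
```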
username_1: Hi Hector, thank you for the example, and I like the transformers API
When I thought about this proposal, I was unsure which problem this should solve exactly? What is the scenario that the user is in and why does the user use ppscore in that scenario?
When did you have this scenario yourself the last time? How did you solve it then?
Maybe you can explain this a little bit more - this would help me in my understanding
username_0: Hello Florian,
the use case is having feature data that exhibits outliers, skewed distributions, or other anomalies that can be improved by transformation instead of dropping the offending values. In this specific case I was looking for the best predictors among thousands of time series with several anomalies: I transformed them and then ran PPS, contaminating the internal cross-validation. I manually changed the cv split PPS uses to a time-series split and pipelined the data. Users may also want to min-max scale the data, or perform more complex transformations that they could pipeline when looking for quick comparisons. The PPS score ranking changed with and without the transformations, possibly significantly.
As a sideline, the cv object could also be exposed as a kwarg in the predictors function to accept other splits; stratified k-folds comes to mind for very unbalanced datasets.
These are the two operations I had to perform manually in this case; making the transformations and the cv object kwargs would automate this and make PPS more flexible.
This enables quick checks. PPS_standard here is pps with the pipeline added:

```python
import PPS as pps
import PPS_standard as pps_s
import pandas as pd
import numpy as np
import sklearn.datasets as ds

diabetes = ds.load_diabetes()
df = pd.DataFrame(data=np.c_[diabetes['data'], diabetes['target']],
                  columns=diabetes['feature_names'] + ['target'])
print(pps_s.predictors(df, y='target')[['x', 'ppscore']].head())
print(pps.predictors(df, y='target')[['x', 'ppscore']].head())
```
username_1: Thank you for the explanation.
Wouldn't it make more sense then to just pipe the crossvalidation object into ppscore?
Because in the end you are concerned about an invalid crossvalidation.
Did you generate a cross-validation object at the end of your pipeline?
username_0: Hello Florian, sklearn's pipeline requires the last element to be the estimator, which in PPS is the automated choice of regressor or classifier, so I have not found any other way to feed it in than overwriting the whole model with a pipeline that has the original estimator as its last element.
The whole pipeline could be an input to PPS; the user would then have to decide between regression and classification, or the logic of _determine_case_and_prepare_df would have to be extended so that it selects among multiple models that are either classifiers or regressors. On the other hand, this would allow comparing PPS using multiple different models, not only trees.
username_1: Hi Hector, I wish you a good start to the new year and sorry for the late reply - I have been on vacation.
Thank you for clarifying the point that the model is the last step for the crossvalidation object and thus it is not possible to pass the full cv object.
If you want you can go ahead and add a PR
username_0: Happy New Year Florian. I will open the PR and propose the changes.
username_0: Hello Florian,
The changes I made require the model (the tree regressor or classifier) within VALID_CALCULATIONS to be re-initialized every time the API is called, so that the pipeline object can be included.
This adds no noticeable computational cost, but it cannot pass this test:

```python
# check random_seed
assert pps.score(df, "x", "y", random_seed=1) == pps.score(
    df, "x", "y", random_seed=1
)
```

Because the model object in the 'model' entry of the dictionary is now a different instance with the same parameters, every other entry of the result dicts matches but the models do not, so the assertion fails. This test (and subsequent result comparisons) could be modified to compare the dicts excluding the 'model' entry, just as a suggestion.
username_1: Thank you for the heads-up. We can easily adjust that test |
department-of-veterans-affairs/va.gov-team | 601714664 | Title: [CONTENT] Review and determine best strategy for writing and cloning content over future VISNs [Content]
Question:
username_0: ## User Story or Problem Statement
As a Content writer, I must have a full understanding of areas of writing, so I can better understand the LOE and approach of using the data for automation and input.
## Goal
_Perform initial review of future SYSTEM writing_
## Objectives or Key Results this is meant to further
- _Increase overall quality experience for veteran's accessibility_
- _Assist and reduce time for complete accessibility and 508 compliance checks_
## Resources - Tools - Documentation
## Tasks
_Complete review of VISN-19, Systems and clinics and estimate ROM:_
- [ ] Rocky Mountain VISN Review
- [ ] Systems that will be reviewed
- [ ] Clinics and level of data available
- [ ] Determine approach in data collection and level and process to proceed.
## Acceptance Criteria
* Rocky Mountain VISN Review
* Systems that will be reviewed
* Clinics and level of data available
* Determine approach in data collection and level and process to proceed.
Status: Issue closed |
codevise/pageflow | 298891377 | Title: Background media playback on mobile browsers and Safari
Question:
username_0: Mobile browsers and current versions of Safari do not allow autoplaying video elements unless they are muted. Using WebAudio API, audio files can be autoplayed after a first user interaction has unlocked the audio context.
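For reference, the unlock pattern boils down to something like the following minimal sketch (independent of Pageflow's actual implementation):

```js
// Resume a suspended AudioContext on the first user gesture.
var context = new (window.AudioContext || window.webkitAudioContext)();

function unlock() {
  if (context.state === 'suspended') {
    context.resume();
  }
  document.removeEventListener('click', unlock);
}

document.addEventListener('click', unlock);
```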
Previous behavior on mobile browsers:
* We mute page background video loops to allow autoplay.
* Atmo is disabled since the current implementation uses one audio element per audio file, each of which would have to be unlocked by an individual user interaction before it could be played.
* Autoplay of media on video and audio pages is disabled. User can start playback via play button.
Since Safari adopted autoplay restrictions on desktop, background video was basically broken. #949 does most of the work required to improve this situation:
* Replaces previous audio library with Howler, which supports playing all audios via a single audio context that can be unlocked by a single user interaction.
* Plays sound of muted background videos via this audio context once it is unlocked.
* No longer needs to disable atmo on mobile devices.
* Displays a button to unmute background media which provides the required user interaction.
Remaining open questions:
1. Should there be logic to only display the unmute button if the entry actually contains either background videos or pages with atmo audio?
Detecting pages like this might be tricky since (a) background videos might not have an audio track to begin with and (b) figuring out if a page has an atmo audio file requires actually looking up the file, since audio file ids in the page configuration are allowed to point to files which have been removed from the entry.
2. Should clicking the "Let's go" button inside the multimedia alert already unmute background media?
There still might be value in letting the user view the entry with background media disabled. On the other hand, having to click an unmute button after just having been alerted that there will be sound might be confusing.
Answers:
username_0: @tilsammans We are finally getting around to tackling this issue. Happy to hear your thoughts.
username_1: It looks like the problem also occurs on desktop browsers:

Tested with Chromium 73.0.3683.75.
username_0: @username_1 As far as I know, the console warning does not indicate a problem; rather, it happens when we try to [resume the audio context](https://github.com/codevise/pageflow/blob/396f1ad81aa4f55cdd1f276c9d4e2f1b5ea033d0/app/assets/javascripts/pageflow/media_player/volume_fading/web_audio.js#L24) since there is no way of knowing whether we come from a user gesture. The Howler approach described above and implemented in #949 still has synchronicity issues when playing uncached videos and is therefore blocked. The current state of autoplay support is outlined in [this doc file](https://github.com/codevise/pageflow/blob/master/doc/media_autoplay_behavior.md).
Status: Issue closed
|
demokratie-live/democracy-client | 978129295 | Title: 🗣[Onboarding] Show the explainer video in the tutorial
Question:
username_0: Several team members report that friends and acquaintances they showed the app to needed someone to guide them through using it, covering basic operation, finding the Wahl-O-Meter, and interpreting the results.
One suggestion for achieving a better transition from download to meaningful use of the app is to show the WOM explainer video in the tutorial.
testing-library/jest-dom | 647434800 | Title: Readme file for Usage
Question:
username_0: I used create-react-app with the typescript template.
I went to the README's Usage section and pasted this code: ` import '@testing-library/jest-dom/' `

and got err:

When I changed that code to `import '@testing-library/jest-dom/extend-expect'` (added "extend-expect"), it works:

Maybe we update the docs??
Answers:
username_1: Maybe what's happening to you is that you are adding the import statement to the file configured in jest to be loaded like this:
```json
setupFiles: ['./jest/setup.js', 'jest-date-mock'],
```
when in reality you should load it in the file that's configured like this in the jest config:
```json
setupFilesAfterEnv: ['./jest/setupTestFramework.js'],
```
And yes, if you confirm that this solves your issue, and you find documentation indicating it the other way, then we should update that documentation. At a glance I think the jest documentation we link to in our README is correct, as it links to the `setupFilesAfterEnv` config setting.
username_0: Thanks for the reply <3. I am not adding anything; I create a default app with "npx create-react-app ." or "npx create-react-app . --template typescript" and that is it. Both commands auto-generate the same line inside setupTests.js/ts, with "extend-expect" at the end:

But when I add it like in the docs, without it, it breaks. I don't even know where to add those config entries above. Never mind, it works with the default generated code. But when I deleted that file and later wanted it back, I went to the docs and got stuck... We can close this.
username_1: Hmmm, no. Don't close this. It should work the same with both `/extend-expect` and without it. Not sure why that's not the case.
username_2: Having the same issue:
`import '@testing-library/jest-dom';` removes the syntax errors in the file, while
`import '@testing-library/jest-dom/extend-expect';` makes the test run fine without type errors
username_1: If you use a recent version of `create-react-app`, you do not need to follow this lib's README. `jest-dom` already comes preinstalled and configured in your generated app.
Bottom line is: can you confirm if in the situations you're experiencing this the version of `jest-dom` being used is v5.x.x? Or is it lower? If you can confirm this is happening in v5 please provide a minimal sample repo that I can clone and investigate.
username_3: I am having similar issues with Vue, although neither of these solutions are working for me.
No matter which one I add, I constantly get errors like:
```
TS2339: Property 'toBeVisible' does not exist on type 'HTMLElement'
```
Both in the IDE and when I try to run a simple test.
username_1: @username_3 can you provide a minimal repo/sandbox where we can reproduce this? It sounds weird because `toBeVisible` is meant to be called on whatever `expect(...)` returns. Under what circumstances does `expect()` return an `HTMLElement`?
username_3: Hi @username_1, and thanks for the response.
I don't have a repo or sandbox at the moment, as this is part of a larger project and I "just" wanted to give `testing-library` a go to see if it's useful for our purposes, but I have spent the entire day so far just trying to get one test to even run because of TypeScript.
The code looks like:
```
import { render } from '@testing-library/vue';
import Component from '@/components/MyComponent/MyComponent.vue';
test('MyComponent', async () => {
const { getByTestId } = render(Component);
expect(getByTestId('test').toBeVisible());
});
```
And this is not only showing up as an error when it runs, but the IDE (PHPStorm) also shows it as an error.
I am aware that it could be some other project setting interfering and causing this, but I am at a total loss as to what it might be.
username_1: Indeed, you are using it wrong:
See how you are calling it (I added spaces to make my point more clear)
```js
expect( getByTestId('test').toBeVisible() );
```
How you should call it:
```js
expect( getByTestId('test') ).toBeVisible();
```
See the difference?
username_3: @username_1 Oh for the love of...I knew it was probably something silly and stupid, but not **THAT** silly and stupid! I was staring at it so long that I simply didn't see it.
I will try this all again later. Thank you very much for your responses and help, much appreciated.
username_1: We've all done something as silly as this at some point so don't worry. One thing that would've helped you in this situation is if you were using TypeScript, which would've flagged this error earlier. Even as you typed, if your editor supports it.
username_1: I'm going to go ahead and close this issue. From all the discussion and my recent tries, the original problem reported here is not an issue in our latest releases. We can always continue the discussion and even reopen it if needed.
Status: Issue closed
|
adobe/helix-pages | 677545379 | Title: Rollback/Disable #413 (cache in production)?
Question:
username_0: #413 caches responses in the inner CDN if an outer CDN is making the request.
As we don't have an automated invalidation mechanism yet this means that some things are cached very stubbornly and you need to flush both inner and outer CDN for changes to become effective.
To make matters wors^H even more interesting, the cache is varied by XFH, so that a given URL might be cached as 200 *and* 404 response.
As I've been burned by this (and as @davidnuescheler made sure to share his pain), I suggest we temporarily roll back this feature (after the go-live) until @username_1's solution for invalidating Helix Pages upon updates from Google, MS and GitHub is in place.
This will create more load for us, especially for tracking URLs (#402), but that would be only a temporary issue.
Status: Issue closed
Answers:
username_0: :tada: This issue has been resolved in version 3.1.2 :tada:
The release is available on [GitHub release](https://github.com/adobe/helix-pages/releases/tag/v3.1.2)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket: |
FreeCodeCampOKC/fccokc_web | 246913978 | Title: Create a Contributors section
Question:
username_0: On the about us page, let's create a contributors section so we can recognize those that help us in creating and modifying this website.
It can look exactly like the organizers section.
Answers:
username_1: Solved by #13
Status: Issue closed
|
theoephraim/node-google-spreadsheet | 620494904 | Title: GoogleSpreadsheet is not a constructor
Question:
username_0: Hi Everyone,
I'm having an issue where my program doesn't recognize this line of code:
```js
var doc = new GoogleSpreadsheet(<SpreadsheetID>);
```
I keep getting an error stating that "GoogleSpreadsheet" is not a constructor. Has anyone been able to resolve this before?
Answers:
username_0: Found I had a previous version installed!
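For anyone else landing here: in v3 of this library `GoogleSpreadsheet` is a named export, so the import has to be destructured. A minimal sketch (the spreadsheet ID is a placeholder):

```js
const { GoogleSpreadsheet } = require('google-spreadsheet');

const doc = new GoogleSpreadsheet('<SpreadsheetID>'); // replace with your sheet ID
```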
Status: Issue closed
|
felangel/bloc | 715240173 | Title: Help with yielding results in parallel for list of indeterminate size.
Question:
username_0: I've seen a few questions here about doing things in parallel but they don't seem to apply my specific issue (or I don't know enough to know if they apply to my issue).
I have a page that needs to make API calls and display data for a non-fixed/user-configurable number of servers. My current method is to trigger a single event when the page is opened to make these API calls and yield the state with the new results added into my Map each time.
However, when I use a `for...in` loop to go over this list of variable length, it waits for the completion of one `yield*` before it moves on to the next server. If my servers happened to be ordered from fastest to slowest response it would work exactly as I want, but all items after a slow server have to wait for it before they get their chance.
My question is how can I execute my `yield*` for my streams in parallel without putting them in a `for...in` loop since that is not async?
Example that highlights what I'm doing:
```dart
Map serverData = {};
if (event is Load) {
final serverList = await getServers();
for (Server server in serverList) { <- Executes synchronously rather than asynchronously
yield* loadServer(server, serverData);
}
}
loadServer(Server server, Map serverData) async* {
List newData = await getData(server); <- Takes an unknown amount of time
serverData[server.id] = newData;
yield Success(serverData);
}
```
The only takeaway I got from looking at other answers was to maybe utilize an event that gets triggered for each server rather than doing it all in one event, but honestly, I'm not sure how that would help me.
Thanks for any guidance that can be offered!
Answers:
username_1: Hi @username_0 👋
Thanks for opening an issue!
I would recommend adding a new event when each server is loaded rather than awaiting the response synchronously.
```dart
if (event is Load) {
final serverList = await getServers();
for (Server server in serverList) {
loadServer(server, serverData);
}
} else if (event is NewDataReceived) {
yield Success(Map.of(state.serverData)..addAll({event.serverId: event.data}));
}
void loadServer(Server server, Map serverData) {
getData(server).then((newData) => add(NewDataReceived(server.id, newData)));
}
```
Hope that helps 👍
username_0: I'll give this a try and report back, thank you for the quick response.
username_0: This worked exactly as advertised. Thank you so much!
Status: Issue closed
|
psu-libraries/psulib_blacklight | 365021627 | Title: What is included in the index that is searched for All Fields default search
Question:
username_0: 
Related to https://psu.app.box.com/notes/320774203905
Answers:
username_1: @username_2 could you please make sure we have a task for each keyword search field that needs to be indexed, so that we can close this issue?
Status: Issue closed
|
sonata-project/SonataAdminBundle | 785400050 | Title: Custom Controller with SonataAdmin Forms
Question:
username_0: Sometimes we may just want to display the Form for some actions like:
- To generate PDF based on inputs
- To generate code based on inputs
- To redirect to any third-party page based on inputs
- Any custom action based on the post controller
For this purpose, just creating a CRUD controller looks like overhead.
**One solution** is to create a custom controller with a form view, but it comes with these caveats:
- The form design will not match the SonataAdmin theme
- You cannot leverage the power of the `FormMapper::add()` methods for form generation
- Other SonataAdmin capabilities are likewise unavailable
Is there a way to have a custom controller with the SonataAdmin form design?
This feature would make the SonataAdmin a killer bundle.
*Note that I am using Symfony 5.x*
Answers:
username_1: ```php
public function buildForm(FormBuilderInterface $builder, array $options): void
{
$builder
->add('type', ...)
->add('value', ...)
;
}
```
Is pretty similar
Status: Issue closed
|
FirebaseExtended/reactfire | 783025483 | Title: ClaimsCheck throwing TypeError: Cannot read property 'claims' of undefined
Question:
username_0: ### Version info
**React:** ^16.14.0
**Firebase:** 8.2.1
**ReactFire:** 3.0.0-rc.0
### Steps to reproduce
Seems to happen with any `AuthCheck` that uses the `requiredClaims` param or by calling `ClaimsCheck` directly
```
<AuthCheck fallback={<Home />} requiredClaims={{admin: true}}>
<Admin />
</AuthCheck>}
```
### Expected behavior
Users with the required custom claims should pass the `AuthCheck` or `ClaimsCheck`; otherwise, the fallback argument should be rendered
### Actual behavior
Receiving error: `TypeError: Cannot read property 'claims' of undefined`

Debugging in Chrome Dev tools indicates that the `useObservable` call may have an unresolved promise? I'm guessing this because `_useIdTokenResult` has a status of "loading" and `isComplete` is false.

Additionally, when I expand the user variable that I have blacked out I can see that it successfully is getting my authenticated user, meaning I can see the email, displayName, and custom claims when inspecting the JWT token.
Answers:
username_1: I'm running into the same issue.
The problem seems to occur due to `latest` always ending up defaulting to `config.initialData` in `useObservable` the first time it returns a value, but no config is sent in from `CheckClaims`;


I solved it locally by adding default values to `data` and `claims`, or by actually sending in a config with `initialData`.


username_2: Is there a workaround to this that doesn't involve changing the node_modules files if an update is upcoming? Suspense, preload user?
username_3: Some context on our slow response here:
`ClaimsCheck` was originally written when reactfire was Suspense/Concurrent mode-only, before the big [optional Suspense change](https://github.com/FirebaseExtended/reactfire/pull/255). In the old version of ReactFire, `data` would never be `undefined` because the component wouldn't get to that line until the data fetch completed (thanks to Suspense).
We used to totally ignore loading states because Suspense would handle them. Making `AuthCheck` and `ClaimsCheck` support non-Suspense/Concurrent mode requires a bit more thought around how we want the api for these components and components like these to work going forward.
For now, the easiest workaround is probably to copy the `AuthCheck` and/or `ClaimsCheck` [source](https://github.com/FirebaseExtended/reactfire/blob/main/src/auth.tsx) and modify it to your needs. That's basically what we did in the [withoutSuspense Auth example](https://github.com/FirebaseExtended/reactfire/blob/b4f22bc0a84729245db87861d5190a0483b19348/example/withoutSuspense/Auth.tsx#L10-L23).
In terms of how we may support this in the future, we'd love to hear thoughts from the community! One idea I've been thinking of is to make `AuthCheck` a hook instead. Something like this:
```jsx
function MyComponent() {
const { data: isValidUser, status } = useAuthCheck({
requiredClaims: { admin: true },
});
if (status === "loading") {
return <LoadingSpinner />;
}
if (isValidUser === true) {
return <AdminDashboard />;
} else {
return <YouDoNotHaveAccessToViewThisPage />;
}
}
```
But definitely open to other ideas!
username_3: Update: I'm working on the proposed hooks in #368, please leave feedback there
Status: Issue closed
|
jupyter/nbgrader | 485373617 | Title: Consider how to prevent access to feedback from other students
Question:
username_0: As I responded there, I'm not really sure of a way around this access issue, barring having better access controls (which we should have once hubshare exists...). But I'm opening an issue here to at least track that this is a problem, and maybe brainstorm ways to deal with it.
Answers:
username_0: I *think* with #1200 this should be ok now, since the hash is now computed from the student ID and the original notebook. You can try to override the student id if you want, but it won't work unless you actually have the original notebook submitted by that student.
username_0: Yes, it definitely adds an additional barrier to entry. And I guess you're right: if we added a secret token it would act basically like a shared secret between the student and the instructor. Other students wouldn't be able to (easily) guess it and so wouldn't be able to read any feedback files without explicitly having the file, the exact time it was submitted, *and* the token (I guess they could try to brute force it, but if we make it long enough it shouldn't be an issue). So the main way someone's feedback could be leaked is if they explicitly gave the token away too.
I think that's a reasonable compromise for the moment.
If you're happy with that solution, would you like to have a go at implementing it? (I'm unfortunately not going to have much time to work on things for the next few weeks, though I might be able to next weekend)
username_1: hubshare seems like a pretty dead project to me; are there any other alternatives on the JupyterLab side?
Anyway, using the cache and a random string seems like the best option to me, unless we really want to store additional information in the hash.
username_2: One interesting thing is that, if grading is a function of only the input state, knowing the notebook (plus the tests from your own notebook) is the same as knowing the grade. Except manual grading. And I doubt data protection authorities would be impressed by that argument. Anyway I don't consider this a top-priority problem, but maybe I should...
We can *almost* use the random string added to the filename when it's submitted as a token to retrieve it (either directly or part of the nb hash). But it's not stored in the student cache (per #1104, if it was students could more easily find their submitted assignments and modify them).
One _could_ add two random strings, one to be used when retrieving and one not saved...
username_2: Wait... the latest algorithm hashes the timestamp, right? Which is stored only in the cache, not in student dir. So student would have to share more than just the notebook in order to have someone else able to compete the hash... there's not a lot of entropy but it's not small either. So actually it's not that bad. (actually almost good!)
Thinking a bit more, if we process a timestamp.txt in the submission dir, it's not that far off to add a feedbacktoken.txt file there, too, which gets treated the same way as the timestamp, and since the name is obvious it's up to student to not reveal it.
username_3: This problem becomes significantly smaller if we move to an API-based exchange with a back-end database:
Such an exchange will already have to distinguish between students & instructors for releasing assignments & grading assignments, so the exchange will be able to return the feedback for any submission (as it will be able to match a piece of feedback to a specific submission), and restrict access to that feedback to only that student [plus any instructor on the course, I presume]
Having said all that, we've not yet looked at implementing the feedback code in our local external exchange code :oops:
username_4: Hi, I think the grading should be deterministic, even if there are hidden tests. If the code between two notebooks is exactly the same, character for character, surely it would have the same outcome for all tests, hidden or not?
Something I suggested in the hackathon in Edinburgh was to add encryption to the submission and feedback mechanism (but I have not implemented it yet). The way it could work is that the instructor includes a public key in the metadata of the notebook in the "generate" step; the student uses this key to encrypt the notebook before submitting it, and before that adds a feedback public key to the metadata of the notebook. The instructor decrypts the submitted notebook, grades it and generates the feedback; this feedback is then encrypted with the public key of the student and sent out.
The advantages of this are that
- students cannot see (decrypt) other student's feedback
- students cannot copy other student's work from the exchange
- students cannot just copy another student's submission and place it in the exchange: they would both have the same feedback public key and this could be detected.
Unfortunately I don't have time this week to have a look at this.
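For what it's worth, a minimal sketch of the encrypt/decrypt round trip this would need. The library choice (PyNaCl) and all names are my own assumptions, not anything nbgrader ships:

```python
from nacl.public import PrivateKey, SealedBox

# Instructor side: generate a keypair; the public key would be placed
# in the notebook metadata during the "generate" step.
instructor_key = PrivateKey.generate()

# Student side: encrypt the submission with the instructor's public key.
submission = b'{"cells": []}'  # stand-in for the notebook JSON
ciphertext = SealedBox(instructor_key.public_key).encrypt(submission)

# Instructor side: decrypt before autograding.
plaintext = SealedBox(instructor_key).decrypt(ciphertext)
assert plaintext == submission
```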
username_3: Actually, we're already using one.
The hard part is getting nbgrader to use it.
There _is_ a plan to push the code into the public domain, however our time keeps getting distracted by _people wanting things done_
username_4: I agree debugging would be made more difficult, but it would not be too bad: to publish an assignment there is only a change of metadata in the notebook. One step is added when collecting: first we collect, say into a folder "collected_encrypted", then we decrypt into the "collected" folder, and from there it follows the normal path, fully decrypted. The feedback generation is the same as before but with the encryption step at the very end, so we still have a chance to look at the generated feedback before it is sent off to the students.
The key management is a more tricky issue. The encryption key for submission could be part of the assignment data in the nbgrader database. For the students key one could use the same for all submission and have it stored somewhere in the student's home directory. I think the same key could be used for all submissions.
username_4: One more thought about students accessing other students' feedback: even if students can access others' feedback, they still cannot know who the feedback was for. So as long as the number of students is large enough there would still be some anonymity. One could even argue for publishing all feedback, so students can learn from other people's mistakes or solutions...
username_1: An API-like system would have many advantages, would allow many additional features (for example real-time grading), and would open the possibility of using existing solutions for grading Python code (e.g. fighting plagiarism), but it would require pretty much rewriting the whole thing from scratch (not quite, but not far off; I don't think there is much to reuse, and even the UI parts should be moved to JupyterLab).
Encryption has it's positives and negatives. I currently like how instructors can debug the system quite easily by going over certain directories. Doing that at the same time with encryption is quite a time consuming task since you would have to make a solution for dealing with private/public keys at the same time (most likely using database entries).
How would you handle the public/private key storage? Store them in the database and provide API for students to receive their own public key?
Other than that it would be quite nice solution, since privacy and data security are quite lacking in the current solution.
username_1: Pretty much my point. nbgrader was not designed with an API service in mind. In general I don't think it would be that complicated to implement (time consuming, most likely), but in the end it would mean redoing the whole extension (especially when combined with the JupyterLab translation).
But in the end, I would say that it might be a good solution either way. For example, we are already using an external grading server for autograding the notebooks, since autograding takes too many resources compared to a normal instructor instance.
username_3: Again, not true.... the fix is to fork nbgrader, and modify `nbgrader/exchange/__init__.py` to pull in our external-exchange code.... the formgrader & assignments-list code all remains the same.
There is a piece of work to make the exchange service a `plugin` - which I believe was started, but may have fallen by the wayside.
The rest is pretty easy - it's just Python, and all you need to do is replicate the methods in the original exchange service & pass back data-structures in the right form
..... however, as @username_2 says, we're drifting off target here
username_2: Our setup using kubernetes+actual users has no known security problems using current nbgrader (except the minor entropy consideration above). In a certain way I actually quite like the current exchange for its elegant use of standards.
If there is a setup where every user has the same filesystem user ID, then yes, everything above is not true. It's nice to discuss encryption in theory, but if you ask me that's trying a bit too much to work around the problem. Even if there is encryption, as long as student has same uid as instructors, then students can also delete files, alter the released files, release their own assignments, and so on (removing formgrader/config is security by obscurity). I think as long as there is a filesystem-based exchange, it can only be considered secure as long as instructors have different UIDs from students (and then everything works nicely and securely). If not true, nothing will work.
So in short, the world (cloud deployments) has sort of moved beyond the concept of users with their own uid, so there's just a bit of a culture mismatch now, not some fundamental flaw with the old generation. I don't think there is need to do such big changes to the filesystem-based exchange, and instead wait for an API-based exchange (modern solution to a modern world). But of course I'm not the one doing anything about it!
... perhaps we could add some instructions for changing UIDs of the formgrader service notebooks?
username_2: One hackish idea I had to solve this problem was to add 50 extra decimal digits to the end of the timestamp microseconds. That provides enough entropy that it could be considered secure, and requires no other changes (assuming that all the parsers can handle the excess digits and ignore them). They should probably be written only to timestamp.txt and not used as part of the filenames.
I don't really like the idea since it's repurposing something which is not obviously security related into something security related. But the simplicity is nice.
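As a rough Python sketch of that idea (illustrative only, not nbgrader code; the 50-digit count follows the comment above):

```python
import secrets
from datetime import datetime, timezone

def entropic_timestamp(extra_digits: int = 50) -> str:
    """Timestamp with extra random decimal digits appended for entropy."""
    base = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f")
    # Parsers are assumed to tolerate and ignore the excess digits.
    extra = "".join(str(secrets.randbelow(10)) for _ in range(extra_digits))
    return base + extra
```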
username_1: There are actually libraries which use processor cycles to create timestamps (in addition to the normal timestamp). I don't know how this works, however, when there are multiple processors and kubernetes in the mix.
username_3: Once you push the exchange through an API, you can do stuff like check user IDs [feedback is for a notebook belonging to user `<userID>`, so only said user can download it...]
username_1: https://github.com/jupyter/nbgrader/issues/1083#issuecomment-552114811
One of the problems with the current feedback system arises if the instructor is ever required to change the contents of the notebook.
username_3: This is a problem anyway... If a lecturer releases an updated assignment after a student has fetched it, the update won't show.
..... and even if it did, you now have the problem that there's the original released version, the student worked-on version, and now a changed-instructor version - how do you resolve the differences?
Simple: you don't - if you mess up the assignment, make a new one.... just as you would if this was a paper exercise.
username_1: Yeah, basically to roll updates to the students, we would need a system where:
- Student has a websocket connection to the server, which would then transmit changes to the notebook (i.e. student subscribes to a channel which is dedicated to the assignment notebook and it sends updates with cell IDs. Some kind of version check for the notebook so client would be notified that there are changes).
- Non-student solution cells would be updated without student intervention
- For solution cells a Git diff/merge-styled window opens and the student can choose which parts they include.
amcharts/amcharts4 | 983782148 | Title: Scrollbar zooms in on wrong values when zooming backwards
Question:
username_0: **Bug description**
I see that the zoom function works properly on all demos on the site, but on my dataset it seems to zoom into the wrong value range, especially when going backwards in time. I've set up a CodePen that reproduces the bug:
https://codepen.io/palmustasj/pen/wveMOOE
**Environment (if applicable)**
amCharts 4
Answers:
username_1: [Date-based data must be sorted in date-ascending order](https://www.amcharts.com/docs/v4/concepts/data/#Date_based_data). Your data is in date-descending order.
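For example, sorting the data ascending before assigning it to the chart should fix it (a minimal sketch, assuming each data point has a `date` field):

```js
// Sort ascending by date before handing the data to the chart
data.sort((a, b) => new Date(a.date) - new Date(b.date));
chart.data = data;
```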
Status: Issue closed
username_0: Ah. Somehow missed that in the documentation. That cleared everything up. Thanks. |
alloy-lang/alloy | 55888482 | Title: please consider joing ing
Question:
username_0: https://github.com/Henrys-minecraft/minecraft-clinet
this is my new project, please join cause we need mney thnkx :100:
Answers:
username_0: why have u not joined yet @username_1
Status: Issue closed
username_2: lol
username_1: its you isn't it?
username_2: no
username_2: look at his page, he's commenting on loads of other shit too
username_1: lol
how can we block him?
username_2: no clue
username_1: fuck why is he using my name asshat
username_2: i dunno, just leave it looks like he's stopped
username_1: did he invite you too?
username_1: you cheeky fucking shit. it is you
username_2: what it's not me
username_1: lol
Let me give you the giveaways:
- Le Sweden xD
- your website repo has a folder named henry
- you are a shit liar
username_2: it may just be my friend henry
username_1: lol totally
username_3: cheeky skrebs
username_1: mmm bby username_3 whats good
username_3: it's good when you're inside
username_1: pls stop group troll. 'tis not good for health
username_3: Doctor roleplay. Very sexy
username_1: No doubt |
Yummy2016/stat679work | 182934584 | Title: Mark homework1, exercise3 Mingyi, YANG
Question:
username_0: Hi,
The SHA is:
456c109
I have updated the files in hw1/ in several parts:
- Extended the Readme.md file in the hw1/ folder
- Wrote down the shell scripts for exercise 3 in scripts/New.sh
- Added a tag to the old version so that I can check that anytime in the future
- As advised, I redirected the results of exercise2 into the results/ folder. Also, I removed the .Rhistory file from the folder
If there are any problems with my new homework, I'd like to know about them. Looking forward to your feedback :)
@username_1
@pgonzalez6
Best regards,
Mingyi
Answers:
username_1: Hi, Mingyi,
Nice script!! I have learned some other ways to extract the statistics of interest, especially how to use the `bc` command, because I tried `bc` first in my script but couldn't find the right way to do the last question...
There is a tiny problem with the **summary.csv** file: you may need to delete all the spaces in your file, otherwise it is not a strict **.csv (comma-separated values)** file... Maybe try `sed 's/[[:space:]]//g' summary.csv > new.csv` (note the doubled brackets; a bare `[:space:]` would not match whitespace). I found this because I also made the same mistake in the first homework.
Overall, it is a great repo, nice job!
Best,
Peigen
username_1: # code: excellent=+1, satisfactory=0, needs work=-1
name,performance,strategy,style,documentation,creativity,submission
Mingyi,+1,+1,+1,+1,+1,+1
Status: Issue closed
|
djoos-cookbooks/newrelic | 750997529 | Title: Distributed Tracing feature in PHP Agent cookbook doesn't work
Question:
username_0: Setting `node['newrelic']['application_monitoring']['distributed_tracing_enabled'] = true` or `node['newrelic']['application_monitoring']['distributed_tracing_exclude_newrelic_header'] = true` fails to modify `newrelic.ini` to set newrelic.distributed_tracing_enabled or newrelic.distributed_tracing_exclude_newrelic_header to `true`, respectively.
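For reference, these are the lines the cookbook should end up rendering into `newrelic.ini` when those attributes are set (the setting names are the agent's documented ones; the exact template output may differ):

```ini
newrelic.distributed_tracing_enabled = true
newrelic.distributed_tracing_exclude_newrelic_header = true
```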
I will be submitting a PR for this correction shortly.<issue_closed>
Status: Issue closed |
stiftungswo/Dime | 345250612 | Title: PHPPdf\Parser\Exception\ParseException: Xml parsing error "Opening and ending tag mismatch"
Question:
username_0: https://sentry.io/swo/dime/issues/607577122/
```
PHPPdf\Parser\Exception\ParseException: Xml parsing error "Opening and ending tag mismatch: br line 38 and p
" in file "/home/stiftun8/public_html/dime/web/" on line 39 on column 60
File "src/Dime/PrintingBundle/Service/PrintService.php", line 48, in render
$responseContent = $facade->render($pdfContent, $stylesheetContent);
File "src/Dime/OfferBundle/Controller/OfferController.php", line 267, in printOfferAction
return $printService->render('DimeOfferBundle:Offer:print.pdf.twig', ['offer' => $offer], 'DimeOfferBundle:Offer:stylesheet.xml.twig', $header);
File "app/bootstrap.php.cache", line 3253, in handleRaw
$response = call_user_func_array($controller, $arguments);
File "app/bootstrap.php.cache", line 3212, in handle
return $this->handleRaw($request, $type);
File "app/bootstrap.php.cache", line 3366, in handle
$response = parent::handle($request, $type, $catch);
...
(12 additional frame(s) were not displayed)
```
Answers:
username_1: Could not reproduce it.
Status: Issue closed
|
npolar/formula | 105391180 | Title: Sometimes data loading fails for formula, but not for surrounding angular-app
Question:
username_0: [Data](http://api.npolar.no/indicator/d043c43b-1f82-500e-8b6e-8ee2b1cc689d)
Notice also that the titles count (0) is wrong in the editor (should be 2)
Status: Issue closed
Answers:
username_0: Closing, even if sometimes seen on mobile devices/mobile net |
census-instrumentation/opencensus-php | 255684102 | Title: [GoogleCloudReporter] Reports error upon failure
Question:
username_0: Currently the exception is swallowed at https://github.com/census-instrumentation/opencensus-php/blob/f412d0b9cab337b071d7e64e129f077b91643965/src/Trace/Reporter/GoogleCloudReporter.php#L179
Maybe we can get the message from the exception and use `error_log` to notify users of the failure.<issue_closed>
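A minimal sketch of the suggested handling (the surrounding call is a simplified, hypothetical stand-in; only the `error_log` usage is the point):

```php
try {
    $this->traceClient->insert($spans); // hypothetical stand-in for the real reporting call
} catch (\Exception $e) {
    error_log('OpenCensus: failed to report trace: ' . $e->getMessage());
}
```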
Status: Issue closed |
chuckfairy/node-webcam | 294182883 | Title: It doesn't work when being called from server
Question:
username_0: `{ Error: Command failed: C:\Users\astra\Workspace\qhacks\node_modules\node-webcam\src\bindings\CommandCam\Commandcam.exe /filename ./images/image1517735395599.jpg
Capture device: USB Video Device
Error: 2147943850
Could not run filter graph
at ChildProcess.exithandler (child_process.js:198:12)
at emitTwo (events.js:106:13)
at ChildProcess.emit (events.js:191:7)
at maybeClose (internal/child_process.js:920:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:230:5)
killed: false,
code: 1,
signal: null,
cmd: 'C:\\Users\\astra\\Workspace\\qhacks\\node_modules\\node-webcam\\src\\bindings\\CommandCam\\Commandcam.exe /filename ./images/image1517735395599.jpg' }
`
Answers:
username_1: You are running this on a Windows server? I'm trying to see what I should test with. That error gets me a Task Scheduler error; have you tried anything on this? https://answers.microsoft.com/en-us/windows/forum/windows_7-windows_programs/task-scheduler-failed-to-start-error-2147943850/e79d7d4d-6878-4ad2-bef2-6a4733ff8793
A lot of searches result in an app called GraphEdit. Maybe there are some libraries in it that will help.
Status: Issue closed
|
kubernetes/community | 206993359 | Title: Add a link checker for this repo
Question:
username_0: While looking at the API conventions docs this morning, I found that many of the links in that doc were broken. We should add a link checker for this repo (and eventually other repos in the community).
Answers:
username_0: @username_2 any chance you might be interested in doing this?
username_1: I'd like to work on this. I've looked at the verify-links.sh script that @username_2 wrote, but I'm not totally clear on how you want the link checking implemented. Are you wanting something like a scheduled task to regularly scan for dead links? Or just something to manually run from time to time?
username_2: @username_1 it should run as part of the verification checks that are part of the normal kube build process. So basically it should run on every PR. Similar to what is done for lint and gofmt checks.
username_1: I'm most likely overthinking it, but my concern was that if you run on every PR it could cause unrelated errors if an existing link goes dead. Would it make sense to expect an extra arg with the files changed in addition to the repo base dir? Or is it better to just leave it simple and just scan the whole repo and let the reviewer read the error log?
username_2: The purpose behind running the test is exactly what you said... to find dead links. So we want things to fail if there are dead links. I suspect the first time we turn this on we'll find a TON, and it'll result in the PR that adds this also including a ton of md file fixes. :-)
Now if you're concerned about having to check too many files and it being slow, we should wrap the call to the checker with code that only calls it for files changed in the current PR/commit.
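A hedged sketch of such a wrapper (here `linkcheck` stands in for whatever checker binary is used, and `PULL_BASE_SHA` assumes a Prow-style CI environment):

```sh
# Only run the link checker on markdown files touched by this PR/commit
base="${PULL_BASE_SHA:-origin/master}"
git diff --name-only "${base}...HEAD" -- '*.md' | xargs -r linkcheck
```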
username_2: For reference, here's the repo with the link checker: https://github.com/username_2/vlinker
username_3: If nobody else is already working on this, I'd like to help out.
@username_2 are there any docs about how to set up CI on a new repo? I'm not sure what the conventions are - e.g. is it okay to use Travis or does everything need to go through the existing https://github.com/kubernetes/test-infra?
username_3: Also, instead of writing a new script, perhaps we could use or refactor https://github.com/kubernetes/kubernetes/blob/master/cmd/mungedocs/links.go and run all the munge scripts on a PR?
username_1: I'm currently waiting on code review from @username_2 for some enhancements on his link checker. Despite the fact that I've already invested some time in this I agree that if there's something out there that's already maintained it's probably a better option. In case we do go with vlinker I'd also like to know about the CI integration. I was using travis on my own forks for testing while I was whittling down false positives.
username_4: /remove-lifecycle stale
username_5: https://github.com/kubernetes/kubernetes/pull/70052#discussion_r239686687
https://github.com/kubernetes/kubernetes/pull/70052#issuecomment-445112134
Turns out we _have_ a linkchecker in kubernetes/kubernetes: https://github.com/kubernetes/kubernetes/tree/master/cmd/linkcheck
It's just been broken and unused for two years. It was in use before we heavily leaned on kubernetes/website and kubernetes/community the way we do today. I would suggest we break that code out and see what can be done with it.
k/test-infra can be a home for it if there's no more appropriate place for it
username_1: PR is in for the link check, but a separate PR will be needed in this repo for the [wrapper script](https://github.com/kubernetes/kubernetes/blob/master/hack/verify-linkcheck.sh). Changes will need to be made to specify which directories should be scanned or skipped.
username_1: Hi @username_5 I got the PR in (#3324) to actually add in the link checker to CI. I'm hoping someone can let me know where to save the output from the command. Once I update that it should be ready to test. We'll probably need a combination of dead link fixes and whitelisting before it can actually be merged though.
username_6: /priority important-longterm
/lifecycle active
username_6: /remove-lifecycle stale
username_5: /assign
As I'm helping review PR from @username_1 |
rspamd/rspamd | 464091603 | Title: [BUG] SIGHUP causes deadlocks
Question:
username_0: When reloading rspamd using `kill -s HUP main_rspamd_id`, there is a very small probability of a deadlock.
At this point, the owner of the lock is an exiting zombie process (the main rspamd process has not yet reclaimed its resources).
The main rspamd process is blocked in the callback handling SIGHUP.
The main rspamd process status is:
```
0x00007f066e622f8c in __lll_robust_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) bt
#0 0x00007f066e622f8c in __lll_robust_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f066e61e373 in _L_robust_lock_202 () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x00007f066e61dcff in __pthread_mutex_lock_full () from /lib/x86_64-linux-gnu/libpthread.so.0
#3 0x000000000046af90 in ?? ()
#4 0x000000000046bf17 in ?? ()
#5 0x000000000046cc41 in rspamd_common_logv ()
#6 0x000000000046ce4d in rspamd_default_logv ()
#7 0x000000000046ced8 in rspamd_default_log_function ()
#8 0x00007f066df36d00 in g_hash_table_foreach () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#9 0x00000000004432af in ?? ()
#10 0x00007f066da887e5 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#11 0x0000000000434c64 in main ()
(gdb)
```
The normal rspamd process status is:
```
0x00007f7664885f8c in __lll_robust_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) bt
#0 0x00007f7664885f8c in __lll_robust_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f7664881373 in _L_robust_lock_202 () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x00007f7664880cff in __pthread_mutex_lock_full () from /lib/x86_64-linux-gnu/libpthread.so.0
#3 0x000000000046af90 in ?? ()
#4 0x000000000046bf17 in ?? ()
#5 0x000000000046cc41 in rspamd_common_logv ()
#6 0x000000000046ce4d in rspamd_default_logv ()
#7 0x000000000046ced8 in rspamd_default_log_function ()
#8 0x00000000004f9977 in ?? ()
#9 0x00000000004f9b68 in rspamd_redis_pool_connect ()
#10 0x00000000004a16f0 in ?? ()
#11 0x00000000004a1bc4 in ?? ()
#12 0x00000000005b1a66 in ?? ()
#13 0x00000000005a16e0 in lua_pcall ()
#14 0x00000000004a2bee in ?? ()
#15 0x000000000056d946 in redisProcessCallbacks ()
#16 0x00007f7663ceb254 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#17 0x00000000004445c7 in start_worker ()
#18 0x00000000004f5860 in rspamd_fork_worker ()
#19 0x0000000000441c43 in ?? ()
#20 0x0000000000442de4 in ?? ()
#21 0x00007f7663ceb7e5 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#22 0x0000000000434c64 in main ()
(gdb)
```
### Versions
1.8.2 Debian
Answers:
username_1: Just bear in mind that 1.8.2 version is not supported. However, it might be still affecting the actual version (and/or master branch). I will check.
username_0: Should rspamd_hup_handler be added to rspamd_log_nolock? I'm verifying. |
dotnet/csharplang | 347112604 | Title: Proposal: Set namespace for file without indentation or braces
Question:
username_0: It would be nice to reduce the level of indentation by being able to declare a namespace for the whole file.
```csharp
namespace is MyProject.Namespace;
using System;
public class MyClass
{
// Fully qualified typename: MyProject.Namespace.MyClass
}
```
There might be some subtle differences with type searching that people aren't expecting because my example implies that the `using` statements are now inside of the namespace declaration.
The compiler would have to enforce some rules like only one `namespace` declaration per file and it must be at the top of the file.
Answers:
username_1: #137
username_0: Sorry....my github search didn't find that. Thanks!
username_1: No worries. It's like trying to find a specific needle in a needle stack in here.
username_2: Please close.
Status: Issue closed
|
ldx/python-iptables | 50547638 | Title: can't commit: b'Resource temporarily unavailable'
Question:
username_0: Sometimes commit() throws an exception if commits happen close together. It looks like a refresh() is required. The same problem occurs if I edit iptables by calling the iptables binary before commit() is called.

```python
# BAD
# get chain from table
table = iptc.Table(iptc.Table.NAT)
table.autocommit = False
chain = iptc.Chain(table, 'PREROUTING')
# delete old rules
for rule in chain.rules:
    chain.delete_rule(rule)
# commit updates
table.commit()
# create rule
rule = iptc.Rule()
# set it up
...
# add rule
chain.insert_rule(rule)
# commit updates
table.commit()
```

```python
# GOOD
# get chain from table
table = iptc.Table(iptc.Table.NAT)
table.refresh()
table.autocommit = False
chain = iptc.Chain(table, 'PREROUTING')
# delete old rules
for rule in chain.rules:
    chain.delete_rule(rule)
# commit updates
table.commit()
table.refresh()
# create rule
rule = iptc.Rule()
# set it up
...
# add rule
chain.insert_rule(rule)
# commit updates
table.commit()
table.refresh()
```
Status: Issue closed
Answers:
username_1: No reply, closing.
username_2: Adding refresh after a call to commit fixed the issue for me
username_3: I'm seeing the same problem and even a `refresh()` after a `commit()` doesn't work.
So far iIve had some success with also adding a `refresh()` after instantiating `iptc.Table`
Can we re-open this?
username_1: @username_3 sure. Do you have some example code that reproduces the problem?
username_3: @username_1 thanks. I'll work on stripping my code down into a usable example.
btw, my comment about adding a refresh after instantiating `iptc.Table` was actually a fluke. I've hit the problem again since.
username_3: @username_1 here's a script I came up with to reproduce my issue:
https://gist.github.com/username_3/9b95d9df5b89fafbf4dc
Note that even though the iptables operations are serialized with a lock, the script still fails with a "Resource temporarily unavailable".
That being said, I don't seem to be able to hit the problem when using multiple threads, just multiple processes.
username_1: @username_3 that makes sense - iptables uses its own lock internally, and will give you the "Resource temporarily unavailable" error message when you're trying to perform multiple operations at the same time.
Do you have a use case for parallel iptables operations? You might need to serialize these e.g. via a queue and a worker process.
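A minimal sketch of that queue-and-worker idea (illustrative only; real code would need error handling and cleanup):

```python
import multiprocessing as mp

import iptc

def iptables_worker(queue):
    """Single process that applies all iptables changes serially."""
    while True:
        job = queue.get()
        if job is None:  # sentinel: shut the worker down
            break
        src, dst = job
        rule = iptc.Rule()
        rule.src = src
        rule.dst = dst
        rule.target = iptc.Target(rule, 'DROP')
        # autocommit is on by default, so each insert is applied immediately
        iptc.Chain(iptc.Table(iptc.Table.FILTER), 'FORWARD').insert_rule(rule)

if __name__ == '__main__':
    q = mp.Queue()
    worker = mp.Process(target=iptables_worker, args=(q,))
    worker.start()
    q.put(('172.16.0.1', '172.16.0.2'))  # enqueue a drop rule
    q.put(None)
    worker.join()
```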
username_1: Another thing that will help: you should call `table.refresh()` before `table.commit()` to make sure the table is up to date:

```python
#!/bin/env python3
import iptc
import multiprocessing


def drop(src, dst):
    """Drop all packets from src to dst"""
    table = iptc.Table(iptc.Table.FILTER)
    rule = iptc.Rule()
    rule.target = iptc.Target(rule, 'DROP')
    rule.src = src
    rule.dst = dst
    chain = iptc.Chain(table, "FORWARD")
    chain.insert_rule(rule)
    table.refresh()
    table.commit()
    table.refresh()


def cleanup(ips):
    """Cleanup all the rules for the given ips."""
    table = iptc.Table(iptc.Table.FILTER)
    chain = iptc.Chain(table, "FORWARD")
    for rule in chain.rules:
        src = rule.src.split('/')[0]
        dst = rule.dst.split('/')[0]
        if src in ips or dst in ips:
            chain.delete_rule(rule)
    table.refresh()
    table.commit()
    table.refresh()


def process(i):
    """add and then delete a bunch of rules."""
    srcs, dests = subnet_ips(i)
    for src, dst in zip(srcs, dests):
        with lock:
            drop(src, dst)
    with lock:
        cleanup(srcs + dests)


def subnet_ips(i):
    """Generate some bogus ips"""
    srcs = []
    dests = []
    for j in range(10):
        srcs.append('172.{}.0.{}'.format(i, j))
    for j in range(10, 20):
        dests.append('172.{}.0.{}'.format(i, j))
    return srcs, dests


def init(l):
    """initialize the shared lock for the multiprocessing pool."""
    global lock
    lock = l


def main():
    l = multiprocessing.Lock()  # with locking, it's a bit harder to reproduce, but still happens
    table = iptc.Table(iptc.Table.FILTER)
    table.autocommit = False
    pool = multiprocessing.Pool(initializer=init, initargs=(l,))
    pool.map(process, range(253))
    pool.close()
    pool.join()


if __name__ == '__main__':
    for x in range(100):
        print(x)
        main()
```
username_3: If you look closely at my example I actually do serialize the operations with a multiprocessing.Lock, which is why I'm confused.
Of course, I could also be doing it wrong...
username_1: @username_3 correct, pls see the code example above - this works for me without errors.
username_3: That solution works for me too, thanks! (although I don't really understand why)
My use case is probably not that common. I have a suite of tests that uses iptables to simulate network partitions between docker containers and I ran into this attempting to parallelize those tests.
username_1: The two main changes in my code:
- Disable `autocommit` when starting up. No reason to enable it if before a `commit()` it always gets disabled.
- Calling `refresh()` makes sure that we don't have a "stale" state cached in iptables before a commit.
username_1: Closing since this seems to be fixed.
Status: Issue closed
username_4:
```python
import os
import iptc

for x in range(3):
    t = iptc.Table('filter', autocommit=False)
    c = iptc.Chain(t, 'INPUT')
    r = iptc.Rule(chain=c)
    r.set_in_interface('lo')
    r.target = iptc.Target(r, 'LOG')
    c.insert_rule(r)
    t.commit()
    t.refresh()
    os.system('/sbin/iptables -D INPUT -i lo -j LOG')
```
When an external iptables or iptables-restore is executed, iptc hangs.
username_1: @username_4 any reason you call iptables via `os.system()`? It might cause all sorts of problems and interfere with python-iptables.
username_4: This is a working code example.
My project runs with tornadoweb. Anybody can change iptables rules from the command line while my code is running. There is no recovering from it; a restart is required.
Opteo/google-ads-api | 1172215508 | Title: How to get the CustomerID of the account I'm currently logged into?
Question:
username_0: But how would I get the customer ID by just passing the refreshToken? Is that possible?
Answers:
username_1: Hi @username_0, you can retrieve the Customer ID like the following:
```ts
import { GoogleAdsApi } from "google-ads-api";
const client = new GoogleAdsApi({
client_id: "<CLIENT-ID>",
client_secret: "<CLIENT-SECRET>",
developer_token: "<DEVELOPER-TOKEN>",
});
const customer = client.Customer({
customer_id: "1234567890",
refresh_token: "<REFRESH-TOKEN>",
});
// If you lose the value above passed into client.Customer
// you can access it on the customer instance with credentials.customer_id
console.log(customer.credentials.customer_id)
```
username_0: But how would I get the customer ID by just passing the refreshToken? Is that possible?
username_2: @username_0 looks like there is a ```client.listAccessibleCustomers(refreshToken)``` method for that... not sure if it works, but it is there. |
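If that method behaves as named, usage would presumably look like this (the `resource_names` field is an assumption based on the REST response shape, so treat this as a sketch):

```ts
const { resource_names } = await client.listAccessibleCustomers(refreshToken);
// e.g. ["customers/1234567890"]; the numeric part is the customer ID
console.log(resource_names);
```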
sindresorhus/strip-json-comments-cli | 149864988 | Title: would like to use .jsonc as commented json input file and .json as stripped automatic output
Question:
username_0: something like
strip-json-comments-cli *.jsonc
and it would write a .json for every .jsonc provided.
Using globs as well, it could be quite useful.
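In the meantime, a small shell loop over the existing CLI (which writes to stdout) would approximate this, assuming the binary is invoked as `strip-json-comments`:

```sh
for f in *.jsonc; do
  strip-json-comments "$f" > "${f%.jsonc}.json"
done
```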
Answers:
username_0: Found an alternative.
http://json5.org/
json5 -c path/to/foo.json5 # generates path/to/foo.json
Status: Issue closed
username_1: Yeah, this is probably not in the scope for this module. The CLI aims to be simple and just write to `stdout`. |
jcmturner/gokrb5 | 810623741 | Title: Creds cache is not reloaded
Question:
username_0: Although it isn't possible for gokrb5 to refresh TGTs obtained from a credentials cache, it is possible (and common in enterprise settings) to refresh ccaches using an external process. When this happens, it would be great if gokrb5 would pick up the new creds.
The MIT libs read the ccache every time they want a ticket. We could do the same, or we could detect changes (inotify?) and update the session cache when the external cache is updated.
I can submit a PR if this is something that would be considered for inclusion.
Answers:
username_0: I had a play around with this and wanted to get feedback on a suggest way that this could work.
Other common kerberos client libraries read the credentials cache every time that they need a ticket, and also store service tickets in the cache as well as TGTs. Considering support for non-file caches, I propose that we introduce a CredentialCache interface that Client uses to store and retrieve tickets. Implementations of that interface could be memory (like today's implementation), FILE, KCM (Heimdal/MacOS), KEYRING. Write-back would be supported as well as read to fully interoperate with other processes using the same ccache.
Perhaps the memory implementation would use inotify on a backing file cache to reload it every time it changes... or perhaps a memory implementation isn't necessary; ccache reads seem to take on the order of 0.5 ms in my testing. Either way, hiding this behind an interface makes changes or additions to the ccache space simpler.
Additionally adding an Initiator interface (similar to JGSS) that is responsible for ASREQ operations to prime a ccache would make sense. We could have a null initiator that is used by folks who have an external process keeping TGTs alive in the ccache, and also keytab and password initiators that will do the same for those initial credential types. That makes it easy to implement other initial auth methods in the future (like PKINIT for example).
Of course all this would require a new major version. I will have something to share soon but am keen for feedback on the approach before finishing an implementation..
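To make the shape concrete, here is an illustrative Go sketch; all type and method names are placeholders, not the real gokrb5 API:

```go
package credcache

// Ticket and EncryptionKey stand in for gokrb5's real message/key types.
type Ticket struct{ Raw []byte }

type EncryptionKey struct {
	KeyType  int32
	KeyValue []byte
}

// CredentialCache abstracts ticket storage so FILE, KCM, KEYRING or
// in-memory backends can be swapped behind the client, with write-back
// of service tickets as well as TGTs.
type CredentialCache interface {
	GetTicket(spn string) (Ticket, EncryptionKey, bool)
	StoreTicket(spn string, tkt Ticket, key EncryptionKey) error
	GetTGT(realm string) (Ticket, EncryptionKey, bool)
	StoreTGT(realm string, tkt Ticket, key EncryptionKey) error
}

// Initiator primes a cache with initial credentials (AS-REQ); a null
// implementation would rely on an external process keeping TGTs alive.
type Initiator interface {
	Initiate(cache CredentialCache) error
}
```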
username_1: @username_0 is this something you're actively working on? I've just come up against this road block myself, and have started considering options - implementing the CredentialCache interface is exactly what I was thinking of doing, but I'd hate to duplicate work already started. |
dotnet/roslyn | 92753939 | Title: WorkFlow errors not shown during build results
Question:
username_0: Open a Workflow Service Application and create a variable with invalid Default values like below

You can see the blue warning signs next to the default values.
If we build the solution in this state, no errors or warnings are shown.
diadoc/diadocapi-docs | 232843597 | Title: ParseAcceptanceCertificateSellerTitleXmlFromFile
Question:
username_0: Good afternoon.
We have run into a problem with one acceptance certificate (attached).
Parsing it with the ParseAcceptanceCertificateSellerTitleXmlFromFile function, the seller name and INN in the AcceptanceCertificateSellerTitleInfo.Seller structure come out wrong.
Name: --- ---, the INN is empty. Only the KPP was resolved correctly. As a consequence, there is no way to identify the client in our system.
It looks as if it took these data from
<СвФЛ>
<ФИОИП Имя="---" Фамилия="---" />
</СвФЛ>
but then it is unclear why the KPP was resolved...
The question is: is this an incorrectly created certificate coming to us, a bug in your function, or something else?
Best regards,
<NAME>.
[test.zip](https://github.com/diadoc/diadocapi-docs/files/1044518/test.zip)
Answers:
username_1: Good afternoon!
This is an incorrectly created certificate. It is incorrect to specify both СвЮЛ and СвФЛ in ИдСв at the same time.
Status: Issue closed
|
json-schema-language/spec | 470863929 | Title: Section 3.3.3 "Therefore" should be "Thus"
Question:
username_0: This is just a question of using more idiomatic American English. Both are correct, but in:
```
Note that both 10 and 10.0 encode values with zero fractional part.
10.5 encodes a number with a non-zero fractional part. Therefore
{"type": "int8"} accepts 10 and 10.0, but not 10.5.
```
"Thus" is a more proper term, as the preceding facts aren't quite the _cause_ of the behavior, rather the _explanation_. This is just a nit, but if there's gonna be another draft, might as well fix it.<issue_closed>
Status: Issue closed |
nroutasuo/level13 | 308137150 | Title: Tiny map generation
Question:
username_0: https://imgur.com/a/cGyI9#WKxTeeJ This map consists of just 14 tiles in a straight line. Too small to really do anything.
A suggestion to solve this would be to check how many tiles the map has after generation and, if there aren't enough, reset and try again.
Answers:
username_1: I had this happen to me on level 12 once. It turns out the rest of the level actually *was* there, it just wasn't connected. The map generation definitely needs some sanity checks attached to it.
username_2: I have just hit this on lvl 11. Is there a way around this? There is no stairs to another level
username_1: Not that I'm aware of. If it's a local save you can use cheats to teleport, but I wound up starting over. If you could, can you mark down the game seed? I can help you find it later today if you have trouble.
username_2: It isn't a local save. How can I find the seed?
username_1: You can either put this in the address bar (make sure to add the `javascript:` at the beginning, most browsers won't let you paste it):
`javascript:alert('World Seed: '+JSON.parse(localStorage.getItem('gameState')).worldSeed)`
Or you can hit `F12` and paste this in and hit enter:
`console.clear();console.log('%cWorld Seed: %c'+JSON.parse(localStorage.getItem('gameState')).worldSeed,'font-size:24px;color:blue','font-size:30px;color:black')`
Either way will give you the seed that's currently saved to that specific URL and browser. 😄
username_2: The seed was 8463
username_1: Here's another snippet, this one will let you enter a seed when you click restart and build the world with the new seed. This way we can manually check out specific seeds. This also might be helpful for #36.
`$('#btn-restart').one("click",function() { let n = parseInt(prompt('Seed number?')); require.s.contexts._.defined['game/worldcreator/WorldCreatorRandom'].getNewSeed = () => n; })`
username_3: I attempted to reproduce this but man it takes a long time to get down to level 11 starting from scratch. I'm going to give up and just assume the original report is true :)
username_2: I could provide a save if exporting was implemented.
It should be possible to give yourself a boost in order to make it through the game quickly using the console. Try cheating using https://username_4.github.io/level13/?file:///
username_4: Thank you for all the seeds and details so far! With these it should be possible for me to track down the problem. I suspect it's a problem in the "random" number generation but could also be that some of the checks for max / min sectors per level or line just aren't working.
username_5: I have a similar issue with my seed 5587
https://imgur.com/a/7LJfq
username_4: The problem with disconnected levels has been fixed in master and will be fixed in the next update. Thank you again for the reports!
I'll definitely look into some more sanity checks for the whole generation in the future.
Status: Issue closed
|
FruitBats/ourfavorites | 265833656 | Title: When the dog bites, when the bee stings, when I'm feeling sad
Question:
username_0: I SIMPLY REMEMBER MY FAVORITE THINGS
AND THEN I DON'T FEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEL
SOOOOO BAAAAAAAAAAAAAAAAAAAD
The issue is that this song is stuck in my head, send help |
ONElua/ONEMenu-for-PSVita | 384705555 | Title: Icon art not loading
Question:
username_0: I think I hit a limit for the maximum number of icons that will show artwork. Quite a lot of icons at the far right of the list/wheel do not load the art. I can scroll ahead and see the ones to the left loading, until it reaches a certain point, then no more load. Is this a known limitation? Can it be worked around by, say, adding folder support?
Using PSTV, v3.68, spoofed as 3.69 using h-encore. SD2Vita (128 GB) as uma0, USB memory stick (256gb Sandisk ultra fit) as ux0.
List - No. Icons/bubbles - Artwork display status
PS VITA GAMES - 173 - All artwork
Homebrews - 18 - All artwork
Adrenaline Bubbles - 201 - 136 with artwork, 65 without
I have added and removed various ps vita games and adrenaline bubbles. It seems artwork can only be displayed for 327 icons.
Answers:
username_1: It is not limiting onemenu is the memory in the Vita, since they require a process to load and release images
Status: Issue closed
username_0: Cool, understood. Reckon there's a way around this?
Krypton-Suite/Standard-Toolkit | 982493479 | Title: [Feature Request]: [Feature Request]: Track Bar Back Color
Question:
username_0: Splitting this off from #296 to allow focused delivery of component functionality
Answers:
username_0: Or "Should be transparent" like the KLabel / KCheckbox (etc.) controls
username_1: Making it transparent like KLabel would work. Or handle transparency for the TrackBar.BackStyle palette definition.
username_0: There is a "Hidden" public field
``` c#
public Form1()
{
InitializeComponent();
this.kryptonTrackBar15.DrawBackground = false;
}
```
I will be making this "public" in the designer, but you can start to use it straight away 👍
username_0: 
username_1: I don't have that property in the designer or programmatically. I checked in the stable and canary versions just to be sure. ???

username_0: It is (Was) set to not be browsable, so will not show up in intellisense:
``` c#
[EditorBrowsable(EditorBrowsableState.Never)]
[Browsable(false)]
public bool DrawBackground
{
get => !_drawTrackBar.IgnoreRender;
set => _drawTrackBar.IgnoreRender = !value;
}
```
Just type it in and set it.
username_1: Beautiful! I didn't realize intellisense could be set up to hide a public property.
username_0: Done:

Status: Issue closed
|
joaomlourenco/novathesis | 196295382 | Title: FCT-UNL PhD for "graus em associação"
Question:
username_0: _From @ruimcportela on October 27, 2016 14:19_
As we discussed in the LaTeX class (prof. <NAME>), there is no template for the PhD cover for joint degrees ("graus em associação"). For that reason I am asking for a LaTeX template to be made (PDF attached - doc available on CLIP).
Thank you,
Rui
[Formatação de capas de dissertações (graus em associação).doc.pdf](https://github.com/username_0/unlthesis/files/556090/Formatacao.de.capas.de.dissertacoes.graus.em.associacao.doc.pdf)
_Copied from original issue: username_0/unlthesis#45_
Answers:
username_0: Working on it…
Status: Issue closed
username_0: This issue was moved to username_0/unlthesis#52 |
trailofbits/polytracker | 765915191 | Title: Pu'er girls genuine door-to-door service [WeChat IO77I9O9 beauties]
Question:
username_0: Where in Pu'er to find genuine "health care" services (find special-service girls) (WeChat IO77I9O9). On [month/day], JD.com officially renamed its original "Special Price Seckill" channel to "Daily Special Price", targeting the lower-tier market. The upgraded "Daily Special Price", together with "JD Seckill" and "Brand Flash Sale", forms a brand-new marketing platform for JD's seckill business. In addition, this year's JD "Seckill Shopping Day" on [month/day] will be fully upgraded, opening various strategic-level resources to brand merchants free of charge through the new platform to create a seckill shopping carnival for consumers. Of course, this adjustment is not just the routine renaming of "Special Price Seckill"; behind it, the operating logic of the new marketing platform is increasingly clear. Reportedly, the seckill business will also launch a "hundred-thousand-hundred-million plan": in the coming years it will focus on incubating "super factory stores" with million- and ten-million-level sales, work deeply with industrial belts to help them become "super industrial belts" breaking the hundred-million mark, and bring tens of thousands of extremely cost-effective products into the lower-tier market. Strategic position raised: three businesses form the new marketing platform. Judging from JD's home page, the entrances of "JD Seckill", "Daily Special Price" and "Brand Flash Sale" have clearly grown, which on a home page where space is at a premium means a major rise in strategic status, and the three are now more closely coordinated. Specifically, the upgraded "Daily Special Price" launches a "crazy grab" gameplay covering home appliances, digital, personal care, beauty, fashion and other full categories, greatly satisfying price-sensitive users. Low price does not mean low quality: keeping a price advantage while guaranteeing quality rests on strong supply-chain control, with incubated industrial belts, direct factory purchasing and reverse customization all cutting costs in production, circulation and sales, so products balance price and quality for users in third- and fourth-tier markets and below. "JD Seckill", the golden channel of the home page, carries a large volume of seckill business centered on hit single products, building the most efficient on-site hit-product marketing channel across all categories and consumer groups. "Brand Flash Sale" is JD's joint seckill with big brands, aggregating brand resources to satisfy first- and second-tier consumers' pursuit of quality while helping the "hidden new middle class" in lower-tier markets pick low-price, high-value goods. The three upgraded blocks each have their own focus and complement one another, giving the new platform stronger promotion and resource coordination, complete marketing solutions for merchants, and broader, more effective consumer reach. Resources opened without thresholds: brand merchants share the market dividend. The lower-tier market opportunity is in front of everyone, but seizing it is the difficulty most brand merchants face. The upgraded "Daily Special Price" breaks the limits of the traditional seckill model, opening a merchant selection mechanism and strategic resources and handing full-category product resources to merchants. All brand merchants can submit products and participate in activities for free, reaching target consumers precisely through personalized recommendation, so more quality products from small and medium brands can reach the lower-tier market. Together with the hit-product-oriented "JD Seckill" channel and the brand-sale-centric "Brand Flash Sale" channel, this satisfies consumers across the entire market. For brand merchants, a more open and fair marketing platform brings more traffic exposure, and in particular lets quality products from small and medium merchants be quickly recognized by consumers. Notably, this year's "Seckill Shopping Day" on [month/day] will be upgraded accordingly, bringing brand merchants a lower participation threshold, more participation forms and more efficient marketing interaction, creating a seckill carnival that digs into the lower-tier market while covering the whole market. Statement: China Entertainment Network publishes this article to convey more information; this does not mean it agrees with its views or confirms its description. Copyright belongs to the author. For more similar articles please browse: comprehensive news 稻善逃梦忧潦按凹放珊咐帜偈仙材
https://github.com/trailofbits/polytracker/issues/2150
https://github.com/trailofbits/polytracker/issues/2084
https://github.com/trailofbits/polytracker/issues/2417
https://github.com/trailofbits/polytracker/issues/2752
https://github.com/trailofbits/polytracker/issues/3085
https://github.com/trailofbits/polytracker/issues/3418
https://github.com/trailofbits/polytracker/issues/2691
https://github.com/trailofbits/polytracker/issues/2653
bergmmqeizzwqbalytuohyyeq
BEEmod/BEE2-items | 541929275 | Title: Reflection gel splat editor model is rotated 90° compared to the other models
Question:
username_0: The title says it all.

Before someone says, "it's just rotated!", know that all items default to the same orientation when placed directly onto a floor. This can also be seen in the Hammer model browser, by scrolling between the models. The dropper is oriented correctly. |
springdoc/springdoc-openapi | 794912381 | Title: springdoc-openapi-ui: I did not customize, but overwrite dummy data by default
Question:
username_0: https://github.com/springdoc/springdoc-openapi/issues/1042
Unfortunately my issue is not about customization via an OpenAPI Bean but about the **default settings.**
Even though I didn't customize anything, the data in my openapi3 spec file is not reflected properly.
Answers:
username_1: You don't need to open new issues.
You can add your comments to your original ticket. If relevant, it will be reopened.
This ticket will be deleted.
Status: Issue closed
|
linkerd/linkerd2 | 829250934 | Title: cli: Improve `install` error message when control plane already exists
Question:
username_0: ```
:; linkerd install | k apply -f -
Can't install the Linkerd control plane in the 'linkerd' namespace. Reason: ConfigMap/linkerd-config already exists.
If this is expected, use the --ignore-cluster flag to continue the installation.
error: no objects passed to apply
```
This error suggests that users run with `--ignore-cluster` instead of using `upgrade`... We should primarily encourage users to use `upgrade` here. I don't think we should talk about `--ignore-cluster` in this error message at all.<issue_closed>
Status: Issue closed |
uci-ml-repo/ucimlrepo-feedback | 929226460 | Title: Applying filter after paging through results doesn't check if page is out of bounds
Question:
username_0: It seems that there's no check that, after applying a new filter, the current page is still valid for the filtered result set.
**To Reproduce**
Steps to reproduce the behavior:
1. Advance 2 pages of results before searching or filtering
2. Apply filter "Data Set Characteristics: Sequential"
3. Apply filter "Subject Area: Business"
4. See error
**Expected behavior**
Either the page being changed to the last valid page in the results or reset to the first page of results. That is, I would not expect a 0-hit view when there are results.
**Screenshots**

**Desktop:**
- OS: Win10
- Chrome
- v91.0.4472.114<issue_closed>
Status: Issue closed |
c-blake/cligen | 702574630 | Title: Is it possible to tag 1.2.1 release please?
Question:
username_0: Nimble takes the latest tagged version of cligen for Nim tests; hence we need a version of cligen with
https://github.com/username_1/cligen/commit/9890d72fb0b99b8c1df132df85bda164103a6250 changes in order to fix test failures for https://github.com/nim-lang/Nim/pull/15332
Thanks
Answers:
username_1: Done. If something is still broken, feel free to submit another PR/ask for another final point release. I know it's sometimes a pain getting all those packages fixed. Good luck.
Status: Issue closed
|
ave-hikari/tips | 700616239 | Title: test
Question:
username_0:

| issue | author | days until close |
| -- | -- | -- |
| [Bug report 2020/08/21](https://github.com/username_0/tips/issues/11) | username_0 | 0.2581365740740741 days |
| [Test issue](https://github.com/username_0/tips/issues/10) | username_0 | 9.797106481481482 days |
| [Bug report 2020/08/02](https://github.com/username_0/tips/issues/6) | username_0 | 18.884675925925926 days |
MelvorIdle/melvoridle.github.io | 799493105 | Title: Special attack percentages are off by 1
Question:
username_0: **Describe the bug**
Successful roll determined by: floor(random()*100) <= event_chance.
Just get rid of the equal sign and it works correctly.
**To Reproduce**
Steps to reproduce the behavior:
1. roll = floor(random()*100)
2. event_chance = 25
3. floor(0.25999999*100) <= 25
4. roll <= event_chance TRUE
**Where to find**
Combat.js if(chanceForSpec<=specChances[i]+specCount)
Combat.js if(chanceForSpec<=combatData.enemy.specialAttackChances[i]+specCount)
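To spell out the off-by-one (variable names follow the report): with `<=`, 26 of the 100 possible rolls succeed instead of 25.

```js
const roll = Math.floor(Math.random() * 100); // uniform over 0..99
const eventChance = 25;
const buggy = roll <= eventChance; // true for rolls 0..25 => 26% chance
const fixed = roll <  eventChance; // true for rolls 0..24 => 25% chance
```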
Answers:
username_1: Better yet, get rid of the floor.
username_2: Fixed in upcoming update
Status: Issue closed
|
geany/geany-plugins | 623637925 | Title: plugin for sprites
Question:
username_0: I'm trying to make a game with sprites, but there doesn't seem to be a plugin listed for using sprites. If there is one then please link me to it, and if there isn't one, could somebody please make it? Thanks in advance to anyone that helps. (Using Geany 1.29 on a Raspberry Pi 3B.)
Answers:
username_1: Seems like something that would be better done in a graphics program like GIMP or Tiled or such, no?
Status: Issue closed
|
Iniciativaz/zos-workshop | 619754837 | Title: zIniciativa System not responding
Question:
username_0: Good afternoon,
I tried to access the Lab mainframe to submit the Quiz, but it is unavailable. Could you check?
Earlier today I was able to access it without problems.
Answers:
username_1: The system was in a maintenance window and is back up now.
Status: Issue closed
|
wenzhixin/bootstrap-table | 606337470 | Title: filter-control doesn't save page number with cookie
Question:
username_0: <!-- https://github.com/username_5/bootstrap-table/blob/develop/CONTRIBUTING.md#bug-reports -->
Hello everyone.
Sorry for my English. I am using version 1.16.0 and the filter-control extension together with cookie, and filtering does not keep the active page in the results. It works in version 1.15.5.
This is the live example:
https://live.bootstrap-table.com/code/username_0/2625
<!-- Love bootstrap-table? Please consider supporting our collective:
👉 https://opencollective.com/bootstrap-table/donate -->
Answers:
username_1: @username_5 @username_4
could you please test this (develop branch) on your computer?
I can't reproduce it on my computer; maybe it's a "bug" of the online editor?

username_2: I can confirm that it's a bug and can easily reproduce it. The page number always returns to 1 when you add a data-filter-control="select". If you remove the filter, it works perfectly.
username_3: I can confirm that it's a bug
username_4: Example with the fix: https://live.bootstrap-table.com/code/username_4/9667
username_1: @username_4 the fix doesn't work for me.
If I filter for `$1`, then select the second page and refresh the example page, `$1` is still set but I get `No matching records found`.
username_4: Hmm.. that's weird.. let me review that
username_4: I can't reproduce it locally. Can you double check @username_1 ?
username_5: I can reproduce the problem that @username_1 said.
username_4: I can't reproduce it locally.

@username_5 @username_1
username_1: I guess the CDN cache has been refreshed; I can no longer reproduce it on the live editor or locally.
Status: Issue closed
|
ga-wdi-exercises/try-ruby | 229965481 | Title: Mark - Try Ruby
Question:
username_0: 1) Three differences would be: the syntax for Ruby is much more compact, Ruby gives an output for every command on the next line, and Ruby is more for the back end than the front end.
2) Three similarities would be that they have many of the same basic types such as numbers, strings and arrays, they both use methods on top of types and they can manipulate types in similar ways.
3) Ruby seems to have a very simple syntax, which makes it easier to understand. I like how it has many predefined methods that do simple tasks that would take much longer in JavaScript. However, I'm still having trouble understanding exactly how and where it will be applied in web development.
Answers:
username_1: I love this also.
👍
Status: Issue closed
|
ryceg/Eigengrau-s-Essential-Establishment-Generator | 646882750 | Title: lib.professions has broken some things.
Question:
username_0: **Describe the bug**

**To Reproduce**
Town Hash (the bit that comes after eigengrausgenerator.com- typically in the format of #adjectiveadjectiveanimal):
Page to navigate to, buttons to press, etc: town description > professions
**Additional context**
Unclear whether it's just that instance; could be more, somewhere.
Answers:
username_1: Fixed via e8f42be58722e2ff86b2e8f7d46494d966129db6
Status: Issue closed
|
Ryochan7/DS4Windows | 914105402 | Title: DS4Windows detects controller, then forcibly stops. Every time the application starts a new controller is added to device manager.
Question:
username_0: **Desktop (please complete the following information):**
- Controller Make and Model: Sony DS4 v.1
- OS: [e.g. Windows 10 Home Build 19042]
- DS4Windows Version: 3.0.8
**Additional context**
Ever since 3.0.7 I've had this issue. I thought 3.0.8 was going to fix it, but it didn't. I tried using older versions but they broke as well. Tried reinstalling ViGEmBus several times.
Answers:
username_1: See this BT tips:
https://github.com/username_2/DS4Windows/wiki/Troubleshooting#gamepad-connection-over-bluetooth-bt
If nothing there helps then you may have a bad luck with BT dongle/chipset or you have a copy-cat DS4 revision1 gamepad and not a genuine Sony gamepad. Often those copy-cats struggle over BT in native DS4 mode.
username_2: At least I finally got around to looking into workarounds for problem number 2. The key to correcting the ViGEmBus problem is to delete the virtual Xbox 360 devices in the Device Manager. You would have to enable **Show hidden devices** and there are really 3 exposed devices per Xbox 360 controller.


I finally looked into automating finding and deleting the virtual devices. There were a lot of old entries on my system from previous experiments. Might have to include something with DS4Windows at some point.
username_2: https://github.com/username_2/PurgeOldXInput
Status: Issue closed
|
alibaba/Sentinel | 355149917 | Title: Not enough logging; some errors are hard to pinpoint
Question:
username_0: <!-- Here is for bug reports and feature requests ONLY!
If you're looking for help, please check our mail list and the Gitter room.
-->
## Issue Description
Type: *feature request*
### Describe what happened (or what feature you want)
Not enough logging; some errors are hard to pinpoint:
A misconfiguration causes the static block of ``com.alibaba.csp.sentinel.Env`` to throw an error; the situation is the same as in this [issue](https://github.com/alibaba/Sentinel/issues/38).
This kind of error is hard to troubleshoot because the logs don't give enough information.
### Describe what you expected to happen
Suggestion: wrap the static code of ``com.alibaba.csp.sentinel.Env`` in a try/catch, log the error, and rethrow it.
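Roughly like this (an illustrative sketch: the actual contents of the static block are simplified, and the `InitExecutor`/`RecordLog` usage is assumed from Sentinel's conventions; imports omitted):

```java
public class Env {
    static {
        try {
            InitExecutor.doInit(); // simplified stand-in for the real init code
        } catch (Throwable t) {
            RecordLog.warn("[Env] Initialization failed", t);
            if (t instanceof RuntimeException) {
                throw (RuntimeException) t;
            }
            throw new IllegalStateException("Sentinel Env init failed", t);
        }
    }
}
```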
### How to reproduce it (as minimally and precisely as possible)
1.
2.
3.
### Tell us your environment
### Anything else we need to know?<issue_closed>
Status: Issue closed |
edvin/tornadofx | 245722906 | Title: DataGrid unconsistent update behaviour
Question:
username_0: DataGrid does not update its items when they are removed from a bound data collection.
It's also inconsistent, it seems that the first removal succeeds, after that nothing happens.
The cells update to their correct contents when clicking on them.
Sample code:
```Kotlin
scrollpane() {
content = datagrid(data) {
cellFormat { x -> ... }
}
}
fun x()
{
data.remove( ... )
}
```
I haven't tested if the scrollpane makes a difference.
Note that you can swap the datagrid with a listview - which works fine.
Switching cellFormat to cellCache makes no difference.
Answers:
username_1: Does it only fail when you remove from the bound list, or if you remove from the items property of the datagrid as well? As small runnable sample would be great, but I'll try to investigate myself in the meantime.
username_1: Ah, I see what's going on. Working on it now :)
username_1: I have committed a fix now, so that the itemsPropertyChangeListener is correctly called. Does it work for you now?
Status: Issue closed
username_0: Appears to be fixed :) |
vert-x3/vertx-mysql-postgresql-client | 238317437 | Title: Connect to MySQL w/o password
Question:
username_0: Hi guys,
Today I ran into an issue connecting to MySQL Server 5.7.17 using an account without a password. I got the following error:
Error 1045 - #28000 - Access denied for user 'root'@'localhost' (using password: YES)
My conf.json:
```
{
"database": {
"host": "127.0.0.1",
"port": 3306,
"maxPoolSize": 1,
"username": "root",
"password": "",
"database": "testdb"
}
}
```
Java:
```
SQLClient client = MySQLClient.createNonShared(vertx, config().getJsonObject("database"));
client.getConnection(result -> {
if (result.failed()) {
logger.info("ERROR");
fut.fail(result.cause());
} else {
logger.info("OK");
}
});
```
P.S. I'm using Vertx 3.4.2, vertx-mysql-postgresql-client 3.4.2.
Answers:
username_1: Did you try not to provide the line `"password": "",` in your JSON?
username_0: Gives the same error
username_1: I've tried to set ` String DEFAULT_PASSWORD = null;` instead of ` String DEFAULT_PASSWORD = "<PASSWORD>";` in `io.vertx.ext.asyncsql.MySQLClient`, which showed errors in test `com.github.mauricio.async.db.mysql.exceptions.MySQLException: Error 1045 - #28000 - Access denied for user 'vertx'@'localhost' (using password: NO)`. I've tried a few variations on docker run commands for MySQL but could not manage to connect to one. Can you try to change the line mentioned above and see if it would work with your setup?
username_2: Try to set the password to null instead of the empty string
```
{
  "database": {
    "host": "127.0.0.1",
    "port": 3306,
    "maxPoolSize": 1,
    "username": "root",
    "password": <PASSWORD>,
    "database": "testdb"
  }
}
```
jeremylong/DependencyCheck | 289151804 | Title: False Positives for async-http-client
Question:
username_0: async-http-client includes dependencies with "netty" in the name resulting in false positives for old versions of netty.
All reported as cpe:/a:netty_project:netty:2.0.38
- async-http-client-netty-utils-2.0.38.jar
- netty-resolver-2.0.38.jar
- netty-codec-dns-2.0.38.jar
- netty-reactive-streams-1.0.8.jar
For dependency:
```xml
<dependency>
<groupId>org.asynchttpclient</groupId>
<artifactId>async-http-client</artifactId>
<version>2.0.38</version>
</dependency>
```
Answers:
username_1: Thanks - yeah, I've seen a lot of FP around libraries used with netty being reported as netty itself. I'm still trying to come up with a better way of handling these. I will include this in the next pass of FP suppression rules.
Thanks!
Status: Issue closed
|