154803091
Synchronize counts for link and unlink

[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew tests with your changes locally?

The output of linking and unlinking a formula can give different counts in some cases, as mentioned in #239:

$ brew link neovim
Linking /usr/local/Cellar/neovim/0.1.4... 40 symlinks created
$ brew unlink neovim
Unlinking /usr/local/Cellar/neovim/0.1.4... 66 symlinks removed

Investigation confirms that the links created and removed are the same, but unlink also counts directories among the "symlinks removed". This PR synchronizes link and unlink so that both report 40 symlinks. Verbose logs confirm this is the number of links created/removed (see below). This PR also takes care not to affect statistics used by other commands, which specifically mention directories. If desired, one could also log the count of created and removed directories in link/unlink's output. Furthermore, this PR restores logging of the mkdir operation during verbose linking, which was removed in https://github.com/Homebrew/legacy-homebrew/commit/f899878220668c7c7f0fcf43c6d294a52b7e79ed without a specific rationale. Logging both creations and removals appears more symmetric.

In previous discussion in #239, @mikemcquaid explained the issue was known and low-priority, while @xu-cheng claimed this was not an issue and offered an alternative explanation. I investigated the alternative explanation in more detail and was unable to find confirming evidence, at least in this instance.

$ brew -v unlink neovim
Unlinking /usr/local/Cellar/neovim/0.1.4...
rm /usr/local/bin/nvim
rm /usr/local/share/locale/af/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ca/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/cs/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/cs.cp1250/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/de/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/en_GB/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/eo/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/es/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/fi/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/fr/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ga/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/it/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ja/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ja.euc-jp/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ja.sjis/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ko/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ko.UTF-8/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/nb/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/nl/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/no/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/pl/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/pl.UTF-8/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/pl.cp1250/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/pt_BR/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ru/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/ru.cp1251/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/sk/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/sk.cp1250/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/sv/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/uk/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/uk.cp1251/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/vi/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/zh_CN/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/zh_CN.UTF-8/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/zh_CN.cp936/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/zh_TW/LC_MESSAGES/nvim.mo
rm /usr/local/share/locale/zh_TW.UTF-8/LC_MESSAGES/nvim.mo
rm /usr/local/share/man/man1/nvim.1
rm /usr/local/share/nvim
rmdir /usr/local/share/locale/zh_TW.UTF-8/LC_MESSAGES
rmdir /usr/local/share/locale/zh_TW.UTF-8
rmdir /usr/local/share/locale/zh_CN.cp936/LC_MESSAGES
rmdir /usr/local/share/locale/zh_CN.cp936
rmdir /usr/local/share/locale/zh_CN.UTF-8/LC_MESSAGES
rmdir /usr/local/share/locale/zh_CN.UTF-8
rmdir /usr/local/share/locale/uk.cp1251/LC_MESSAGES
rmdir /usr/local/share/locale/uk.cp1251
rmdir /usr/local/share/locale/sk.cp1250/LC_MESSAGES
rmdir /usr/local/share/locale/sk.cp1250
rmdir /usr/local/share/locale/ru.cp1251/LC_MESSAGES
rmdir /usr/local/share/locale/ru.cp1251
rmdir /usr/local/share/locale/pl.cp1250/LC_MESSAGES
rmdir /usr/local/share/locale/pl.cp1250
rmdir /usr/local/share/locale/pl.UTF-8/LC_MESSAGES
rmdir /usr/local/share/locale/pl.UTF-8
rmdir /usr/local/share/locale/no/LC_MESSAGES
rmdir /usr/local/share/locale/no
rmdir /usr/local/share/locale/ko.UTF-8/LC_MESSAGES
rmdir /usr/local/share/locale/ko.UTF-8
rmdir /usr/local/share/locale/ja.sjis/LC_MESSAGES
rmdir /usr/local/share/locale/ja.sjis
rmdir /usr/local/share/locale/ja.euc-jp/LC_MESSAGES
rmdir /usr/local/share/locale/ja.euc-jp
rmdir /usr/local/share/locale/cs.cp1250/LC_MESSAGES
rmdir /usr/local/share/locale/cs.cp1250
40 symlinks removed

$ brew -v link neovim
Linking /usr/local/Cellar/neovim/0.1.4...
ln -s ../Cellar/neovim/0.1.4/bin/nvim nvim
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/af/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ca/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/cs/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/cs.cp1250
mkdir /usr/local/share/locale/cs.cp1250/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/cs.cp1250/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/de/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/en_GB/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/eo/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/es/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/fi/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/fr/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ga/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/it/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ja/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/ja.euc-jp
mkdir /usr/local/share/locale/ja.euc-jp/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ja.euc-jp/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/ja.sjis
mkdir /usr/local/share/locale/ja.sjis/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ja.sjis/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ko/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/ko.UTF-8
mkdir /usr/local/share/locale/ko.UTF-8/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ko.UTF-8/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/nb/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/nl/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/no
mkdir /usr/local/share/locale/no/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/no/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/pl/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/pl.UTF-8
mkdir /usr/local/share/locale/pl.UTF-8/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/pl.UTF-8/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/pl.cp1250
mkdir /usr/local/share/locale/pl.cp1250/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/pl.cp1250/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/pt_BR/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ru/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/ru.cp1251
mkdir /usr/local/share/locale/ru.cp1251/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/ru.cp1251/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/sk/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/sk.cp1250
mkdir /usr/local/share/locale/sk.cp1250/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/sk.cp1250/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/sv/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/uk/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/uk.cp1251
mkdir /usr/local/share/locale/uk.cp1251/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/uk.cp1251/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/vi/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/zh_CN/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/zh_CN.UTF-8
mkdir /usr/local/share/locale/zh_CN.UTF-8/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/zh_CN.UTF-8/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/zh_CN.cp936
mkdir /usr/local/share/locale/zh_CN.cp936/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/zh_CN.cp936/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/zh_TW/LC_MESSAGES/nvim.mo nvim.mo
mkdir /usr/local/share/locale/zh_TW.UTF-8
mkdir /usr/local/share/locale/zh_TW.UTF-8/LC_MESSAGES
ln -s ../../../../Cellar/neovim/0.1.4/share/locale/zh_TW.UTF-8/LC_MESSAGES/nvim.mo nvim.mo
ln -s ../../../Cellar/neovim/0.1.4/share/man/man1/nvim.1 nvim.1
ln -s ../Cellar/neovim/0.1.4/share/nvim nvim
40 symlinks created

Saving this output to ~/foo and counting link creations and removals with grep confirms they're both indeed 40, and that directory creations and removals are both 26 here. This fits the explanation that "66 symlinks removed" counted both symlinks and directories.

$ grep '^ln ' ~/foo|wc -l
40
$ grep '^rm ' ~/foo|wc -l
40
$ grep '^rmdir ' ~/foo|wc -l
26
$ grep '^mkdir ' ~/foo|wc -l
26

Nice work here. A suggestion but otherwise 👍. Thanks for jumping on this!

LGTM. Will wait for any other maintainer thoughts and otherwise 🚢

LGTM. But FYI, this won't fix the whole synchronized-numbers problem, because depending on the other files in your prefix, how the symlink will be created will vary. (See Keg#resolve_any_conflicts for more detail.)

> Because depending on your other files in the prefix, how the symlink will be created will be varied. (See Keg#resolve_any_conflicts for more detail.)

Ah, thanks for the pointer! I think I see what you mean, but it seems that cannot keep happening if I keep repeating link/unlink as in #239. Just to make sure I get it: for each conflict, brew link foo will replace a symlink to a directory pertaining to another formula with symlinks to the individual files; brew unlink foo will not remove symlinks for other formulae, and it will not "merge" the individual symlinks back into a single directory symlink, so we arrive at a slightly different situation (which is fine). However, it appears that immediately repeating brew link foo / brew unlink foo will not find the same conflicts again, so there the counts should match. And that's good: otherwise, just repeated linking and unlinking (certainly an odd operation) would "leak" inodes. That's why I opened #239. Anyway: I'm satisfied with the result after this PR.

Thanks for your contribution to Homebrew! Without people like you submitting PRs we couldn't run this project. You rock!
gharchive/pull-request
2016-05-13T21:23:32
2025-04-01T06:37:03.682526
{ "authors": [ "Blaisorblade", "mikemcquaid", "xu-cheng" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/242", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
220817802
xcode: 8.3.1 is latest version. Our CI images are already updated with it.

when "10.12" then "802.0.38" is out-of-date FWIW. Latest CLT is:
~> /Library/Developer/CommandLineTools/usr/bin/clang --version
Apple LLVM version 8.1.0 (clang-802.0.38)
Latest Xcode is:
~> clang --version
Apple LLVM version 8.1.0 (clang-802.0.41)

I can't read, ignore the comment I posted last night 😓.
gharchive/pull-request
2017-04-11T01:57:03
2025-04-01T06:37:03.685506
{ "authors": [ "DomT4", "MikeMcQuaid" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/2474", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
260360336
install.rb: report caveats for installed formulae

Pull Request template:
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[ ] Have you successfully run brew tests with your changes locally?

Detailed explanation: When installing multiple formulae, caveats can get lost in the output that accompanies a normal installation. This PR reminds the user of all the caveats at the end of the installation. Caveats are repeated only if more than one formula is being installed. Subsequent calls to brew install <formula-with-caveats> do not result in the caveat message being printed again. Example:

$ brew install go python
... [snip] ...
==> Please take a note of the caveats for the following installed formulae:
==> Caveats for "go":
A valid GOPATH is required to use the `go get` command. If $GOPATH is not specified, $HOME/go will be used by default: https://golang.org/doc/code.html#GOPATH
You may wish to add the GOROOT-based install location to your PATH:
export PATH=$PATH:/usr/local/opt/go/libexec/bin
==> Caveats for "python":
This formula installs a python2 executable to /usr/local/bin. If you wish to have this formula's python executable in your PATH then add the following to ~/.bash_profile:
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
Pip and setuptools have been installed. To update them
pip2 install --upgrade pip setuptools
You can install Python packages with
pip2 install <package>
They will install into the site-package directory /usr/local/lib/python2.7/site-packages
See: https://docs.brew.sh/Homebrew-and-Python.html

Now caveats for installed_on_request deps are also reported. What is not taken into account is whether these dependencies are being upgraded in the current run (not sure if the commented-out line would do exactly that or not). Not ready yet, argh... Closing for now... will reopen when I have more time to work on it.
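A minimal Ruby sketch of the behavior this PR describes; the method and attribute names here are illustrative assumptions, not the actual Homebrew internals:

```ruby
# Hypothetical sketch: re-print caveats after installing multiple formulae.
def report_caveats(installed_formulae)
  # Repeat caveats only when more than one formula was installed.
  return if installed_formulae.length <= 1

  with_caveats = installed_formulae.select { |f| f.caveats && !f.caveats.empty? }
  return if with_caveats.empty?

  puts "==> Please take a note of the caveats for the following installed formulae:"
  with_caveats.each do |f|
    puts "==> Caveats for \"#{f.name}\":"
    puts f.caveats
  end
end
```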
gharchive/pull-request
2017-09-25T17:46:56
2025-04-01T06:37:03.691045
{ "authors": [ "maxim-belkin" ], "repo": "Homebrew/brew", "url": "https://github.com/Homebrew/brew/pull/3209", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
216835766
:tex is deprecated

Hi. I'm new to contributing to Homebrew. As my first commit, I was looking for warnings in the Homebrew formulae. One package showed the error :tex is deprecated. So to fix that error, can I just remove that :tex dependency? Thanks.

@MikeMcQuaid can you please help me? Thanks,
gharchive/issue
2017-03-24T16:19:49
2025-04-01T06:37:03.855616
{ "authors": [ "raza15" ], "repo": "Homebrew/homebrew-tex", "url": "https://github.com/Homebrew/homebrew-tex/issues/42", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
123531417
i3 + XQuartz + (maybe) El Cap

Hi, I'm trying to get i3 running under XQuartz on my 13" retina MacBook Pro running 10.11.2, and the only remaining issue is that I can't for the life of me get my meta key working. I've tried every combination of the input preferences checkboxes on XQuartz, I've tried full screen and not full screen, I've tried using .Xmodmap to explicitly map Meta_R and Meta_L to mod1, and I'm stumped. If anyone has this working and can share their .config/i3/config file, along with any other tweaks they had to do in .Xmodmap, .Xresources, or .xinitrc.d/*, I would massively appreciate it. Thanks.

If you could try installing i3 outside of Homebrew, that would help us confirm that this is a Homebrew-specific problem. Thanks!

Trying to build from source but bumping my head against the wall. I have created a CPATH that pulls in the include files I need; however, now I'm hitting an undefined type of CARDINAL. Any suggestions? I searched for documentation on how to build/install on the Mac and, outside of building from source, I came up dry. Perhaps my Google-fu is weak...

What key are you using for Meta? Option (aka "alt")? One or both sides? And what isn't working about it? From what I understand about XQuartz and Mac keyboards, you don't really have a Meta key, and should maybe be mapping Alt instead.

@bceverly I have i3 (4.12, see #210) running on XQuartz (2.7.9) on Mac OS X El Capitan (10.11.4 (15E65)) and use both command keys (⌘) as a modifier. Hope the following helps:

% grep 'set $mod' ~/.config/i3/config
set $mod Mod2
% xmodmap
xmodmap: up to 2 keys per modifier, (keycodes in parentheses):
shift Shift_L (0x40), Shift_R (0x44)
lock Caps_Lock (0x41)
control Control_L (0x43), Control_R (0x46)
mod1 Mode_switch (0x42), Mode_switch (0x45)
mod2 Meta_L (0x3f), Meta_R (0x47)
mod3
mod4
mod5

We'll accept PRs for this but we're not actively working on it at this time.

Could anyone share a step-by-step guide on how to set up i3 on Mac OS X? I like the i3 style very much, and I got a MacBook Pro recently. I have installed Ubuntu in order to use i3, but even though Ubuntu is good enough, the energy saving system is much better on OS X.
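Putting the working pieces from this thread together, a plausible setup is sketched below; the modifier assignments come from the xmodmap dump above, and keycodes may differ on other machines:

```
# ~/.Xmodmap -- put both command (⌘) keys on mod2
clear mod2
add mod2 = Meta_L Meta_R

# ~/.config/i3/config -- use mod2 as the i3 modifier
set $mod Mod2
```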
gharchive/issue
2015-12-22T18:19:38
2025-04-01T06:37:03.860871
{ "authors": [ "Michael-Jing", "MikeMcQuaid", "afh", "apjanke", "bceverly", "dunn" ], "repo": "Homebrew/homebrew-x11", "url": "https://github.com/Homebrew/homebrew-x11/issues/170", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
985654893
Basic CLI and Project Structuring Changes

Basic CLI
- Main functions: [preprocess, train, inference]
Created project structure
- Reflecting CLI functions
Formatter Utility
- Syntax-highlighted printing
- Pretty printing/logging
- Color traceback
Demo
- Usage/Help output
- Syntax output
- Training output
- Traceback

I changed the style to keep it consistent with the rest of the repo, except in one newly-created file. Let's discuss code style and formatters in Discord so that we won't accidentally botch each other's code. I'd also ignore DeepSource for now as it complains about all issues from all moved files. Let's fix the problems some other time.
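The formatting features listed above map naturally onto Python's rich library; the sketch below assumes rich, which may not be what this PR actually uses:

```python
from rich.console import Console
from rich.syntax import Syntax
from rich.traceback import install

install(show_locals=True)  # colored tracebacks for uncaught exceptions

console = Console()
console.log("Pretty, timestamped logging")            # pretty printing/logging
console.print(Syntax('print("hello")', "python"))     # syntax-highlighted output
```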
gharchive/pull-request
2021-09-01T20:42:01
2025-04-01T06:37:03.888734
{ "authors": [ "ClashLuke", "bionboy" ], "repo": "HomebrewNLP/HomebrewNLP", "url": "https://github.com/HomebrewNLP/HomebrewNLP/pull/4", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
618619122
Question: how do I open and preview this project?

OP, how do I open the project correctly?

The project uses cloud functions, and among other things a few collections need to be initialized first:
Collection name | Purpose
articleRelations | article relations collection
articles | articles collection
banners | banner collection
cardRelations | card relations collection
cards | cards collection
categories | categories collection
userRelations | user relations collection

@Honye So does that mean I have to upload the cloud functions under my own cloud development environment, initialize the data collections, and change the cloud environment initialization in app.js?
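For reference, cloud initialization in app.js usually looks like the sketch below; the environment ID is a placeholder for your own:

```js
// app.js: hypothetical cloud environment initialization
App({
  onLaunch() {
    wx.cloud.init({
      env: 'your-env-id', // replace with your own cloud environment ID
      traceUser: true
    })
  }
})
```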
gharchive/issue
2020-05-15T00:41:04
2025-04-01T06:37:03.900055
{ "authors": [ "Honye", "hanyucd" ], "repo": "Honye/weapp-mark", "url": "https://github.com/Honye/weapp-mark/issues/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
449993054
local variable 'bcs_corrected' referenced before assignment

Due to v1.4.2 being too slow, I tried the develop branch (542ec344fe26b1cbbd72194c6ed7b809a82de19b), as you suggested in the other thread at some point, but it failed with the following error:

Loading whitelist
Creating barcode tree for whitelist
Counting number of reads
Started mapping
Processing 72 reads
CITE-seq-Count is running with one core.
Mapping done for process 10349. Processed 8 reads
Mapping done
Counter({'TCCCAAGCATTAAGCT': 2, 'CGCGGAAGAGCACACG': 2, 'CCTTTCTGAAAGGAGA': 2, 'ACTTGCCACCAAGTCC': 2, 'ACAATTTAGACATGGC': 2, 'ACGGGGAAGGACGTCA': 2, 'CGAATATCCTTAAGAG': 1, 'GCATAAAGTGCACCGC': 1, 'AAGCACATCACCTTGA': 1, 'ATCGGAAGAGCACACG': 1, 'TTAGCATCAACAGGCC': 1, 'CAACCACTAATAGGTA': 1, 'CCTCTAATCGGTCGTC': 1, 'CCCGGCGTACGGGGAA': 1, 'CCATTAATAATGTTTT': 1, 'CGTGAATTCTGAGGCC': 1, 'CCTTTACCAGCTTTAG': 1, 'TTCCATTCTTTAGCTC': 1, 'ATAAATCACCTCACTT': 1, 'AGGCCGTCCGATCTAG': 1, 'AAGTTGCCATACAAAA': 1, 'ATTCAAACGGCCTGTC': 1, 'TAAACGCAAGCCTCAA': 1, 'GCAAAAAATTTAGGGT': 1, 'GGTGGTCTATAGTGTT': 1, 'GCGCGATTCGATCTGC': 1, 'TAGCAAGGCCACGACG': 1, 'TTCGGGAGGGTAGTCG': 1, 'CGTTTGGTCAGTTCCA': 1, 'TTTTCTTCTGCGTCAG': 1, 'CAGTAGACTCCTTCTG': 1, 'GATGTGGTAGAAGTCG': 1, 'GGCTGCGGACGACCAG': 1, 'ATAGCAAAGCCTCTAC': 1, 'GGCGCATAACGATACC': 1, 'GACCAATCTGACCAGC': 1, 'ACGTATTTAGCCACAT': 1, 'AAACGTCGGCTACAGT': 1, 'TGCCCTACTTGCCCTA': 1, 'CTTGCTGCTAAAGGTC': 1, 'CTCGGAGGAGCACACG': 1, 'GTGAGTTGTTCCATTC': 1, 'GAGTCTACACAGTGTT': 1, 'TACTGCTTGTTTACGA': 1, 'AATTCATCCATTAACT': 1, 'TAAGAGACCATCTTAA': 1, 'TCATAAGAGGTTTTAC': 1, 'GATCGAAGAGCACACG': 1, 'CGAGCAGTAGACTCCT': 1, 'CGCATTGCATTCATCA': 1, 'AGATTGAGGCTGGGAA': 1, 'AGAACGTGAAAAAGCG': 1, 'TCTGATTGTCCAGTTG': 1, 'AACGTACCTTCAAGAA': 1, 'AAGGTTCCCGATCTAA': 1, 'AATCCGACCAATCCCA': 1, 'GTACCTCGCAACGGCT': 1, 'GCCGATACTTGGAACA': 1})
Correcting umis
Traceback (most recent call last):
  File "/home/ubuntu/miniconda/bin/CITE-seq-Count", line 11, in <module>
    sys.exit(main())
  File "/home/ubuntu/miniconda/lib/python3.6/site-packages/cite_seq_count/__main__.py", line 474, in main
    bcs_corrected=bcs_corrected,
UnboundLocalError: local variable 'bcs_corrected' referenced before assignment

Any idea?

Hello @hisplan I see that I made some changes that didn't pass running tests and still pushed them. I guess I needed to work on different systems and wanted the latest changes. I'm gonna work on performance and fixes tomorrow. I'll push some changes by then. Let me know if you see improvements then. You mentioned 1.4.2 being too slow, would you mind telling me which part was problematic for you?

I'm passing the 10x v2 whitelist and setting --expected_cells=0. I suspect the CB correction part?

Yes, that's the exact issue I want to fix. Trying not to go towards creating an index for it, and this is not trivial. Another quick fix for you would be to use the filtered list from 10x v2. It will go way faster.

Have you had a chance to work on this issue? I've tried the 10x v3 whitelist (millions of barcodes;;) with --expected_cells=0, and it's been running for more than 3 days...

Hey @hisplan One quick thing: if you're running v3 chemistry, I would suggest a cell barcode correction of 0, i.e. --bc_collapsing_dist 0. Some barcodes have only one hamming distance between them, so collapsing would be a mistake. I suspect they have some control code in there because they talk about 3M barcodes but the whitelist has 6M.

@hisplan On the performance part, I haven't been able to get a great increase in speed. One more thing I have to test is to create an index and use this instead of a tree for barcode recovery. This might take a lot of memory though. You can try this branch. I added some filtering of low-content cells to speed things up.

Hi @Hoohm, thanks for your quick reply. I also had a chance to talk to 10x a few months ago about the whitelist for 10x v3 chemistry. As far as I understand, the whitelist has 6.8M barcodes, but half of it is for feature barcoding. And I was told that by design there is a mismatch of 2bp between the GEX library and the feature barcode. I asked if I could get separated lists for the GEX and feature barcode libraries, but they said no;;;; I don't know why, but if you could get it, please do let me know :-) Anyway, given this, would you still suggest --bc_collapsing_dist=0? I've been running with the value 1, but if I recall correctly, I did get a message something like this:

Testing cell barcode collapsing threshold of 1
Value is too high, reducing it by 1
Testing cell barcode collapsing threshold of 0
Using 0 for cell barcode collapsing threshold

Interesting. I'm gonna try to get the info as well. The response you got is strange because the mapping between cell barcodes for mRNA and ADTs is here and it has 6M lines... I'm very confused :D Hmm, the branch I'm working on should skip the cell barcode correction completely if the number is at 0, whereas the older ones don't skip it properly. Definitely try the branch with this modification. It also stops correcting unmapped UMIs and low-UMI cells (1 or 0 UMI for a tag). Both of these filters reduce computation time. It's also reported in the report.

I didn't fully understand it either. But that was pretty much the response I got from 10x. And yeah, I'm using the exact same file. The filename suggests 3M, but it actually contains 6M (3M for GEX, 3M for feature barcoding). Anyway, mainly I've been using the official 1.4.2 and the develop branch, but I will try feature/index_whitelist. I'm having two problems with the feature/index_whitelist branch.

Problem 1
When I tried to load the matrix file using Seurat 3's Read10X function, it threw this exception:
Error in dimnamesGets(x, value) : invalid dimnames given for "dgTMatrix" object

Problem 2
Using the --dense parameter threw this exception:

Writing dense format output
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1653, in create_block_manager_from_blocks
    mgr = BlockManager(blocks, axes)
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 114, in __init__
    self._verify_integrity()
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 311, in _verify_integrity
    construction_error(tot_items, block.shape[1:], self.axes)
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1691, in construction_error
    passed, implied))
ValueError: Shape of passed values is (5, 819821), indices imply (4, 819821)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/bin/CITE-seq-Count", line 10, in <module>
    sys.exit(main())
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/cite_seq_count/__main__.py", line 483, in main
    filename='dense_umis.tsv')
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/cite_seq_count/io.py", line 48, in write_dense
    pandas_dense = pd.DataFrame(sparse_matrix.todense(), columns=columns, index=index)
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/frame.py", line 424, in __init__
    copy=copy)
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/construction.py", line 167, in init_ndarray
    return create_block_manager_from_blocks([values], [columns, index])
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1660, in create_block_manager_from_blocks
    construction_error(tot_items, blocks[0].shape[1:], axes, e)
  File "/home/ubuntu/miniconda/envs/feature-index-whitelist/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1691, in construction_error
    passed, implied))
ValueError: Shape of passed values is (5, 819821), indices imply (4, 819821)

Hello @hisplan where are you on this? Have you been able to run it? I'm wrapping things up for a new release.

Great! Don't use the V3 whitelist, it's more than double the number of barcodes. For V3 I would suggest not using the full one and just running without it. Another important thing: the cells you will find in the protein data with CSC are not going to match the RNA cells. You have to use the mapping they provide on their GitHub page.

I feel a bit uncomfortable running without the whitelist. The workaround we have now seems fine. Anyway, which mapping file are you referring to?

This list. You have to map the whitelist you have from the RNA cell barcodes to the protein cell barcodes. You can use the file I linked here to do that. If you use the whitelist from the RNA directly for the protein, you are actually getting RNA from one cell and protein data from another.

Oh, I'm doing this for hashtag. The scRNA-seq count matrix that I mentioned previously has the error-corrected barcodes, which were generated using that list from the 10x GitHub repo.

I'm not sure I understand. Which error-corrected barcodes are you talking about? The cell barcode of the TAGS barcode?

The cell barcodes in the scRNA-seq count matrix are already error-corrected. I'm feeding this to CITE-seq-Count via --whitelist (instead of using the whole 10x v3 whitelist). Since CITE-seq-Count is working with a smaller set of whitelisted barcodes, the running time was reasonable for me. By the way, you can close this ticket. I will try out the new release once it's out.

Yes, that is a sound strategy to use. If you use only the RNA part of V3 and add your own cell hashing protocol, all good. If you use REAP-seq protein data from 10x V3, you need to use the mapping between the RNA cell barcodes and the protein cell barcodes. I'll close the ticket then :)
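For context, the UnboundLocalError in the traceback at the top of this thread is the classic Python pattern where a name is only bound on some branches; a minimal illustration and fix (do_correction is a hypothetical stand-in, not the project's actual code):

```python
def do_correction(umis, whitelist):
    return len(umis)  # stand-in for the real correction logic

def correct_barcodes(umis, whitelist, collapsing_dist):
    if collapsing_dist > 0:
        bcs_corrected = do_correction(umis, whitelist)
    # Bug: with collapsing_dist == 0 the branch is skipped, so the name
    # 'bcs_corrected' was never bound and the next line raises UnboundLocalError.
    return bcs_corrected

def correct_barcodes_fixed(umis, whitelist, collapsing_dist):
    bcs_corrected = 0  # fix: bind a default before the conditional
    if collapsing_dist > 0:
        bcs_corrected = do_correction(umis, whitelist)
    return bcs_corrected
```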
gharchive/issue
2019-05-29T19:57:15
2025-04-01T06:37:03.918622
{ "authors": [ "Hoohm", "hisplan" ], "repo": "Hoohm/CITE-seq-Count", "url": "https://github.com/Hoohm/CITE-seq-Count/issues/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
200750330
Spaces in username during login

Expected behavior: space trimming
Actual behavior: a "Username or password is incorrect." error.
Steps to reproduce: Go on the demo, try an ID with a space in front or behind it; it doesn't work.
Great project BTW!

A simple fix would be to just trim the spaces, but couldn't it be a breach in security if you allowed the user access if they accidentally entered a space, or any invalid character, anywhere in the credentials?

I don't see why; you just delete the ones at the beginning and at the end, and those in between are part of the username. But should spaces be allowed at all?

Usernames shouldn't have spaces in them, and it makes sense to trim any spaces on login. It is a simple change that should produce a better user experience.
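A minimal sketch of the suggested fix, in generic JavaScript rather than HospitalRun's actual login code:

```js
// Trim only leading/trailing whitespace before validating credentials;
// interior characters are left alone, since they are part of the username.
function normalizeUsername (username) {
  return username.trim()
}
```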
gharchive/issue
2017-01-13T22:43:57
2025-04-01T06:37:03.938306
{ "authors": [ "Jaden-Giordano", "jkleinsc", "tleb" ], "repo": "HospitalRun/hospitalrun-frontend", "url": "https://github.com/HospitalRun/hospitalrun-frontend/issues/922", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1358163315
Bennylo/fix node

Fixes #[replace brackets with the issue number that your pull request addresses].
Changes proposed in this pull request:
[list out summary of changes here]
[list out summary of changes here]
[list out summary of changes here]
[etc]
Newly added dependencies with Bundlephobia links:
[Link of the new dependency]
[Link of the new dependency]
[etc]
Note: pull requests without proper descriptions may simply be closed without further discussion. We appreciate your contributions, but need to know what you are offering in a clearly described format. Provide tests for all code that you add/modify. If you add/modify any components, update the storybook. Thanks! (you can delete this text)

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.

Opened the wrong PR, closing this now
gharchive/pull-request
2022-09-01T02:36:46
2025-04-01T06:37:03.943378
{ "authors": [ "CLAassistant", "bennymelb" ], "repo": "HospitalRun/hospitalrun-frontend", "url": "https://github.com/HospitalRun/hospitalrun-frontend/pull/2981", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
187679856
Your readme image is dead

It's back! Weird....
gharchive/issue
2016-11-07T10:50:07
2025-04-01T06:37:03.953088
{ "authors": [ "fghhfg" ], "repo": "HubPress/blog.hubpress.io", "url": "https://github.com/HubPress/blog.hubpress.io/issues/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2183199366
refactor: date renderer

Use textParticle to refactor the date renderer.
Fixes #457, https://github.com/Hufe921/canvas-editor/issues/451#issuecomment-1993630554

@Hufe921 Deleting this branch outright, as you suggested, is not right: it causes glyphs to be drawn stuck together, so I rolled that back.

Under RTL, displaying 签署日期:2022-08-10 17:30:01 like that is wrong; it should be 2022-08-10 17:30:01 :签署日期. In other words, the label 签署日期 and the date after it have to be drawn separately, otherwise the rendered result is wrong.

Under RTL, displaying 签署日期:2022-08-10 17:30:01 like that is wrong; it should be 17:30:01 2022-08-10 :签署日期.

I don't quite follow: how do the code changes in this PR achieve that effect?

> Under RTL, displaying 签署日期:2022-08-10 17:30:01 like that is wrong; it should be 17:30:01 2022-08-10 :签署日期.
> I don't quite follow: how do the code changes in this PR achieve that effect?

@Hufe921 Let me explain why this effect comes about; the logical analysis follows. Once drawing is handed directly to textParticle, it simply draws 签署日期:2022-08-10 17:30:01 by coordinates, which is no different from the character-by-character rendering of ordinary text, but under RTL the situation changes. The rendered result would still be 签署日期:2022-08-10 17:30:01, because this whole string consists of LTR characters: its drawing origin is computed as a whole from the coordinate of 签, and each character's next character is also LTR, so it renders unchanged.

With my change, however, the rendering is split into two parts: 签署日期: and 2022-08-10 17:30:01.

签署日期: is drawn with the coordinate of 签 as the origin, and the result is :签署日期, because the colon is the last character, with no character after it; following RTL's default order, it gets drawn at the end.

2022-08-10 17:30:01 is drawn with the coordinate of 2 as the origin, and the result is 17:30:01 2022-08-10. Because the date value contains spaces, which act as breaks, under ctx.direction='rtl' it is understood as three parts: the date, the space, and the time. Since the date consists of LTR characters, it renders as one left-to-right unit (including the hyphens: a hyphen follows the surrounding LTR characters, while on its own it would default to RTL). The space behaves as RTL under RTL, so it is placed at the end. The time works the same way as the date, and the colons render LTR because the characters after them are LTR.

That yields the final effect. The underlying drawing is not as simple as directly computing origin coordinates: a symbol's drawing order depends on the next character. What RTL calls "after" is LTR's "before", which is why, as I mentioned in #451, the notion of left/right here is actually "wrong", because it is counterintuitive.

@Hufe921 Verified that this resolves it; this issue has been handled.
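For reference, a minimal sketch of the bidi behavior described above, using the standard canvas 2D API; the coordinates are illustrative, and the rendering results noted in the comments are taken from the discussion:

```js
const ctx = document.querySelector('canvas').getContext('2d')
ctx.direction = 'rtl'

// Drawn as a single run: the all-LTR string renders unchanged.
ctx.fillText('签署日期:2022-08-10 17:30:01', 200, 40)

// Drawn as two runs, as in this PR: per the discussion, the label's trailing
// colon has no following character, so it resolves to the RTL order and
// moves to the end; the date and time swap around the space likewise.
ctx.fillText('签署日期:', 200, 80)          // renders as :签署日期
ctx.fillText('2022-08-10 17:30:01', 140, 80) // renders as 17:30:01 2022-08-10
```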
gharchive/pull-request
2024-03-13T06:15:00
2025-04-01T06:37:03.963720
{ "authors": [ "HerbertHe", "Hufe921" ], "repo": "Hufe921/canvas-editor", "url": "https://github.com/Hufe921/canvas-editor/pull/460", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
390022198
Health check endpoints

Add a Route 53 health check as per https://github.com/HumanCellAtlas/dcp-monitoring/blob/master/README.md
┆Issue Number: HCAB-334

Does this issue even make sense? If the data browser is a JS app with no server-side component, there is nowhere to put a health-check endpoint.

Exactly, that's why this is not implemented. It is not super clear what the goal is and how to meet the HCA alerting requirements with this. For the data portal we quickly did this: https://dev.data.humancellatlas.org/health/ but the portal does have individual pages. Not sure what we could test on the browser. That the index.html page loads? We could create a separate health.html page and load that just to check CloudFront and S3, but that won't tell us if the actual site is working. We could have some testing endpoint run the JS and then look for something in the DOM, etc. Not sure what the goal is here and what level of effort is justified.

Superseded by #398.
gharchive/issue
2018-12-12T01:09:37
2025-04-01T06:37:03.987718
{ "authors": [ "NoopDog", "sampierson", "theathorn" ], "repo": "HumanCellAtlas/data-browser", "url": "https://github.com/HumanCellAtlas/data-browser/issues/397", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1904810882
Enquiring something

Did you use an image recognition tool in this program?

Thanks for your question @Zahidpichen1. Please use the Label Studio community Slack forums for questions about Label Studio integrations.
gharchive/issue
2023-09-20T11:44:21
2025-04-01T06:37:04.000330
{ "authors": [ "Zahidpichen1", "hogepodge" ], "repo": "HumanSignal/label-studio", "url": "https://github.com/HumanSignal/label-studio/issues/4804", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1164622648
Meeting topic overly verbose

The default meeting topic does not need the #asfmeeting prefix: indeed, that is misleading, as this is not IRC. It could be something like:
Official ASF Members Annual Meeting. You do NOT need to announce yourself.
Though it should probably start as:
Official ASF Members Annual Meeting. Starts at yyyy-mm-dd hh:mm UTC (in xx minutes)
Likewise, #backchannel is not really appropriate for the other channel and could be dropped.

First bit addressed with b189657
gharchive/issue
2022-03-10T01:07:48
2025-04-01T06:37:04.002863
{ "authors": [ "Humbedooh", "sebbASF" ], "repo": "Humbedooh/asfmm", "url": "https://github.com/Humbedooh/asfmm/issues/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
477161201
Unable to fetch data from bitbucket collector

Hi, I have tried to configure the Bitbucket collector in Hygieia. I am facing the following issues.

application.properties:
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=localhost
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=false
dbhostport=localhost:27017
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Logging File location
logging.file=./logs/bitbucket.log
# Collector schedule (required)
git.cron=0 0/1 * * * *
# Mandatory parameters
git.host=mybitbucketrepo.com/
git.api=/rest/api/1.0/
# Maximum number of days to go back in time when fetching commits
git.commitThresholdDays=15
logging.level.com.capitalone.dashboard=DEBUG
logging.level.com.capitalone.dashboard.collector=DEBUG
# Page size for rest calls
# Only applicable to Bitbucket Server.
# Only applicable to Bitbucket Cloud.
git.pageSize=25
# Bitbucket product
# Set to 'cloud' to use Bitbucket Cloud (formerly known as Bitbucket)
# Set to 'server' to use Bitbucket Server (formerly known as Stash)
# More information can be found here: https://github.com/capitalone/Hygieia/issues/609
git.product=server
# Bitbucket key for private repos
git.key=fM0sYnrWPYxozSNuMX9dcwhhJtkOKkNz
bitbucket.key=fM0sYnrWPYxozSNuMX9dcwhhJtkOKkNz

Issue resolved. Closing
gharchive/issue
2019-07-25T08:11:07
2025-04-01T06:37:04.028620
{ "authors": [ "Sbrenthughes", "Thirumaran2011" ], "repo": "Hygieia/hygieia-scm-bitbucket-collector", "url": "https://github.com/Hygieia/hygieia-scm-bitbucket-collector/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
729919944
Add emoji reactions to posts and comments

[x] For posts change upvoting to emoji reactions.
[x] For comments and posts.
[x] Mouse over the reactions to see how many of each one there are
[x] Show a list of who reacted (and in which way?)
[x] Do we want a limited set like FB or all of them like Slack? Let's go with all emojis, but have a default set like Slack
[x] What set of default emojis to use?
[ ] Should each post type have a different default set or the same one?
[x] How to deal with current "upvotes"? Make them all a certain emoji, make thumbs up?
[x] Emoji picker: maybe use https://github.com/missive/emoji-mart or https://github.com/ealush/emoji-picker-react
[x] React button will use smiley face icon
[ ] How will list view show reactions?
[ ] Do we want to add to grid views?

evo-node integration checks
[ ] old vote graphQL allows for successful 'vote'
[ ] old vote graphQL allows for successful removal of 'vote'
[ ] post reaction creates reaction
[ ] post reaction updates post.reactions and post.num_people_reacts
[ ] comment reaction creates reaction
[ ] comment reaction updates post.reactions and post.num_people_reacts
[ ] post delete reaction creates reaction
[ ] post delete reaction updates post.reactions and post.num_people_reacts
[ ] comment delete reaction creates reaction
[ ] comment delete reaction updates post.reactions and post.num_people_reacts

@brodeur Small grid view is the most obviously bad, and there are several approaches we could take to shift that. Some of those approaches might be done by Aaron and others might need Tom (if we want to build another component for this use case).
Big grid view: at a minimum, scaling down the size of all the components in EmojiRow would go a long way here. We might also want to hide emojis when there are too many.
List row view: again, scaling down EmojiRow stuff here would be nice.

Heya. How's this coming along? 🤞🏼 It's a much-anticipated feature for us 😃

It's so closseee, coming in the next couple weeks.

Is this a good place to request a skin tone selector or slider for emoji? (Perhaps it should be added to the nearly-complete list, or else set aside for future development.) I suppose it's a good chunk of work, but it may be increasingly expected for matching the feature set of platforms such as Discord and Slack.
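If emoji-mart is chosen, the picker wiring could look roughly like the sketch below; this assumes emoji-mart's v3-style React API, and handleReaction is a placeholder for the new reaction mutation:

```jsx
import { Picker } from 'emoji-mart'
import 'emoji-mart/css/emoji-mart.css'

// Hypothetical wiring of the picker to a reaction handler.
export default function ReactionPicker ({ postId, handleReaction }) {
  return (
    <Picker
      title='Pick a reaction'
      onSelect={emoji => handleReaction(postId, emoji.native)}
    />
  )
}
```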
gharchive/issue
2020-10-26T21:26:19
2025-04-01T06:37:04.039947
{ "authors": [ "evolverine", "gcassel", "thomasgwatson", "tibetsprague" ], "repo": "Hylozoic/hylo-evo", "url": "https://github.com/Hylozoic/hylo-evo/issues/676", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1498039791
How to evaluate BLEU score on LM1B?

Dear authors, I understand that you plan to release your code in January. But could you share more details on how you evaluate the BLEU score and PPL on the LM1B dataset? I am also working on diffusion models for text and may potentially cite your paper. Thanks!

Hi, We computed the BLEU score with all test data as references and reported the average BLEU score of each generated sentence. We sampled 1K sentences respectively for evaluating BLEU and S-BLEU. For PPL, the ELBO on the test set is an upper bound on the token-wise NLL, so we first convert that bound to a per-word NLL and use it to get the per-word PPL. Hope this helps!

@yujianll Hi, Yes, we sum up the NLL for all tokens in the sequence as the NLL for the sequence. The validation ELBO is around 110, and the average number of words per sequence in the test set is around 26. Thus the per-word NLL is around 4.23, and the test PPL is obtained as exp(4.23).

@Hzfinfdu Thanks for the reply! I have another low-level question. When you calculate NLL on the test set, do you sum over all T diffusion steps, or do you sample a few time steps for the calculation? If you do sample, how many time steps do you use?

@yujianll Hi, We trained DiffusionBERT with 512 steps and used DDIM sampling to uniformly sample 128 steps on the test set, both for NLL calculation and generation. Hope this helps!

Thanks, this helps a lot!
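The PPL conversion described in the thread, written out (using the approximate numbers quoted above):

```python
import math

elbo_nats_per_sequence = 110  # validation ELBO, an upper bound on sequence NLL (nats)
avg_words_per_sequence = 26   # average test-sequence length in words

nll_per_word = elbo_nats_per_sequence / avg_words_per_sequence  # ~4.23
ppl = math.exp(nll_per_word)                                    # ~69
print(f"per-word NLL = {nll_per_word:.2f}, PPL = {ppl:.1f}")
```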
gharchive/issue
2022-12-15T08:55:15
2025-04-01T06:37:04.052278
{ "authors": [ "Hzfinfdu", "jzhang38", "yujianll" ], "repo": "Hzfinfdu/Diffusion-BERT", "url": "https://github.com/Hzfinfdu/Diffusion-BERT/issues/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2566548397
Investigate cuOpt

NVIDIA cuOpt (CUDA Optimization)
NVIDIA cuOpt is a GPU-accelerated optimization library designed to solve optimization problems in real time. It is especially focused on route optimization, fleet management, and other cases where fast decision-making is crucial for efficient operations.
https://build.nvidia.com/nvidia/nvidia-cuopt
gharchive/issue
2024-10-04T14:40:10
2025-04-01T06:37:04.062669
{ "authors": [ "inamoriza", "rafa-casado" ], "repo": "I3A-NavSys/navsim", "url": "https://github.com/I3A-NavSys/navsim/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2132845630
[BUG] Wrong precompiled link on Release Page

Applio 3.0.6

Describe the bug
On the release page for Applio 3.0.6, the precompiled link points to:
https://huggingface.co/IAHispano/applio/resolve/main/Applio V3 Precompiled/ApplioV3.0.6.zip?download=true
which gives me an error. Maybe you meant this one?
https://huggingface.co/IAHispano/Applio/resolve/main/Compiled/ApplioV3.0.6.zip?download=true

To Reproduce
1. Go to the Applio 3.0.6 release page
2. Click on the precompiled link
3. See error

Expected behavior
I wanted to download the precompiled version of the project, ApplioV3.0.6.zip, from Hugging Face by clicking the link.

Yeah, we are doing some changes; check the readme for the links.

OK, was just relaying it in case you weren't aware of it. The link for the precompiled directory in the readme page is also a 404.

Fixed
gharchive/issue
2024-02-13T17:41:53
2025-04-01T06:37:04.067683
{ "authors": [ "aitronssesin", "blaise-tk", "cheesypy" ], "repo": "IAHispano/Applio", "url": "https://github.com/IAHispano/Applio/issues/289", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1686673335
OpenVPN

Please implement a variable to decide whether OpenVPN setup is needed or not, and make the OpenVPN deployment dependent on that variable (the default should be false, I think).

Hi @irmsan, agreed! Could you please take a look at the PR? It adds a new setup_openvpn variable to the bastion's variables, which is used only for the OpenVPN tasks. Will that resolve this issue? Let me know.
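The requested toggle might look roughly like this in Ansible; the variable name follows the PR description, while the file layout and task names are assumptions:

```yaml
# defaults/main.yaml: OpenVPN setup is opt-in, defaulting to off
setup_openvpn: false

# tasks/main.yaml: gate the OpenVPN tasks on the variable
- name: Set up OpenVPN on the bastion
  ansible.builtin.include_tasks: openvpn.yaml
  when: setup_openvpn | bool
```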
gharchive/issue
2023-04-27T11:41:02
2025-04-01T06:37:04.106071
{ "authors": [ "irmsan", "jacobemery" ], "repo": "IBM/Ansible-OpenShift-Provisioning", "url": "https://github.com/IBM/Ansible-OpenShift-Provisioning/issues/118", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
806735908
Automated e2e integration tests for the fhir-notification module While reviewing https://github.com/IBM/FHIR/pull/1932 I noted that we really don't have any coverage of our fhir-notification module in our CI pipeline. Way back when we added NATS.io support for notifications, we did the work to spin up the NATS cluster as part of our docker-compose env (with db2), but we never did circle back around to adding automated tests for it: From build/README-DB2.md: Note: If you are testing NATS notifications, invoke the NATS subscriber via node fhir-server-test/src/test/nodejs/nats-subscriber. If this is your first time, install the dependencies first by installing Node.js (if not already installed) and running (cd fhir-server-test/src/test/nodejs && npm install). I'm hoping we can expand on that and add automated tests that cover the create, update (and patch), and delete notification events within our existing CI. We should probably upgrade the NATS dependency to 2.0 while we're at it... I added Kafka (as it's the easiest to add, if we want to add nats, I can add that too)
gharchive/issue
2021-02-11T20:59:40
2025-04-01T06:37:04.109314
{ "authors": [ "lmsurpre", "prb112" ], "repo": "IBM/FHIR", "url": "https://github.com/IBM/FHIR/issues/1936", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
348424866
Simplify mappings

At the moment there are different mappings for different loading mechanisms:
- IGC REST API mappings
- istool mappings
- IA REST API mappings
It would be better if there were just one consistent way of doing mappings, and we transparently translated that in the background to the form we might require for a particular scenario (otherwise the user needs to configure the same mapping multiple times across each area).

... or simply recommend an approach where the key attributes to be mapped are defined in a separate vars file, and leave it to the playbook itself to determine how to apply those mappings? (That way users that define the target values for mapping don't need to be aware of / worry about the source values.) See the sketch below.
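The vars-file idea might look something like this; the structure and keys are purely illustrative assumptions:

```yaml
# vars/mappings.yaml: one place to define the target values; the playbook
# would translate these into IGC REST, istool, or IA REST mapping syntax
# behind the scenes.
mappings:
  - type: host
    attr: name
    value: "PRODUCTION"
  - type: database_schema
    attr: name
    value: "DB2_PROD"
```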
gharchive/issue
2018-08-07T17:53:52
2025-04-01T06:37:04.111463
{ "authors": [ "cmgrote" ], "repo": "IBM/ansible-role-infosvr-import-export", "url": "https://github.com/IBM/ansible-role-infosvr-import-export/issues/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
559126674
fix(floating-menu): add floating menu container to Storybook

Closes #576

Summary
To get the floating menu to scroll together with the page, it should be attached to its parent element instead of <body> itself. To do that, you just need to add data-floating-menu-container to the element you want the menu to attach to (in this case the Storybook container).

Acceptance Test (how to verify the PR)
Verify it on ?path=/story/watson-iot-table--stateful-example-with-secondary-title
Please note that our Storybook has a wrapping element with the styling position: fixed wrapping the stories, so the menu offset calculation fails sometimes. It's not a problem in normal apps.

If normal apps don't use position: fixed, then maybe we shouldn't use it by default in Storybook? I'm seeing an issue where, when I scroll the page and then click on the overflow menu, the overflow menu is not positioned correctly. Now this could be caused by what you described here... but how can we be sure, and can we validate this? Should we try to turn off the position: fixed in the story so that we can validate that the correct behaviour will happen?

@stuckless thanks for going through this. In our case the position: fixed on Storybook is from the 'centered' addon. My recommendation is to remove that; with that, we get the same experience as Carbon (they don't center the components in the stories) and also avoid positioning problems like this (@davidicus). I updated this PR removing the centered addon for the specific story, and it fixes the offset problems. You can test it here: https://deploy-preview-870--carbon-addons-iot-react.netlify.com/?path=/story/watson-iot-table--stateful-example-with-secondary-title
If we implement it Storybook-wise, this also eliminates the need to add data-floating-menu-container to our Storybook container. Also, as you said, this doesn't fix the problem downstream, but it helps with fixing it. I don't have access to the downstream implementation to check it. It is not really well documented on Carbon's component, but it should be the choice of whoever is developing an app whether to attach an overflow menu to the body (which should be 99.9% of the cases) or to another element so the positioning works correctly. Also, given how the floating menu positioning is implemented, it cannot be arbitrarily attached to the best available parent, like the table itself in our case.

@enricoberti Thanks for looking into this. So, I can confirm it appears to work OK in my browser. I guess the original bug was opened against this only happening on an iPad. I don't have an iPad (here) for testing... have you verified that, even with your most recent change, it behaves the same on iPad vs desktop? Nothing to do with your change :) but these kinds of behaviours scare me a little, since it's like a minefield for the downstream consumer. I.e., when do I set this, where, how, etc. :(

@stuckless yes, verified on the iPad too :) I know how you feel with that; it should actually be documented better on the Carbon side of things too.

:tada: This PR is included in version 2.39.3 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
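For downstream apps, using the attribute described above is a one-line change; a sketch with hypothetical markup:

```html
<!-- Overflow menus attach here instead of <body>, so they scroll
     together with this container. -->
<div id="root" data-floating-menu-container>
  <!-- app content with tables and overflow menus -->
</div>
```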
gharchive/pull-request
2020-02-03T14:40:17
2025-04-01T06:37:04.120398
{ "authors": [ "enricoberti", "stuckless", "tay1orjones" ], "repo": "IBM/carbon-addons-iot-react", "url": "https://github.com/IBM/carbon-addons-iot-react/pull/870", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
380392525
Add @carbon/motion package with timing information

This issue is for creating the @carbon/motion package with SCSS and JS support for easing curves for components. Will need the values from @shixiedesign when she has the chance!

Easing | Productive | Expressive
Standard easing | cubic-bezier(0.2, 0, 0.38, 0.9) | cubic-bezier(0.4, 0.14, 0.3, 1)
Entrance easing | cubic-bezier(0, 0, 0.38, 0.9) | cubic-bezier(0, 0, 0.3, 1)
Exit easing | cubic-bezier(0.2, 0, 1, 0.9) | cubic-bezier(0.4, 0.14, 1, 1)
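Until the package lands, the curves in the table can be dropped straight into CSS; for example (the durations are placeholders, only the curves come from the table):

```scss
// Productive set from the table above
.dropdown {
  transition: transform 110ms cubic-bezier(0.2, 0, 0.38, 0.9); // standard
}
.modal--open  { transition: opacity 110ms cubic-bezier(0, 0, 0.38, 0.9); } // entrance
.modal--close { transition: opacity 110ms cubic-bezier(0.2, 0, 1, 0.9); }  // exit
```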
gharchive/issue
2018-11-13T19:44:46
2025-04-01T06:37:04.123623
{ "authors": [ "joshblack", "shixiedesign" ], "repo": "IBM/carbon-elements", "url": "https://github.com/IBM/carbon-elements/issues/65", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
622123478
Commit latest guptadiv:master to master

WKC patch
gharchive/pull-request
2020-05-20T22:29:37
2025-04-01T06:37:04.124483
{ "authors": [ "guptadiv" ], "repo": "IBM/cloud-pak", "url": "https://github.com/IBM/cloud-pak/pull/267", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1566100538
Improve our use of tf.Dataset In the TFGNN model we are using tf.Dataset to make a data processing pipeline. It has certain advantages and disadvantages that we are bumping up into. In this issue, I'm gathering details so that we can have a complete picture of the situation. The good Datasets are basically required (as I understand it) to split your data on multiple GPUs. So we would at least need a Dataset at some stage of the pipeline. Datasets make it very easy to work with individual graphs and then batch them into batched graphs when the time comes for batched data. This greatly improves the maintainability of some parts of the code. It is a standard tensorflow API for data so it is fully featured. It makes it easy for us to mix the definitions dataset with the proof states dataset. It gives us a convenient way to cache the data in memory (or in files). To the extent that we can cache our processed dataset in memory, it makes it really fast to work with the dataset over hundreds of epochs. The bad Our current use of tf.Dataset doesn't scale well and is slow. Ideally we wouldn't need to worry about shuffling or caching the data in the dataset and instead let our loader handle this, but the problem is that processing the data is too slow. (See the observations section below for some thoughts on what is slow.) The dataset needs to be compiled. This is fine if you only have one super dataset pipeline, but was bad when we tried in the predict server to make a new dataset for every new definition. Observations Ideally we would like this dataset processing to be fast enough that we don't need to cache or shuffle the data outside of the loader. But this may also not be practical and we need to look into better ways to cache and shuffle the preprocessed data. I haven't done exact timings, but processing the 70,000 definitions seems to take a lot longer than the 500,000 proof states. This could be either because the definitions are much larger graphs or because there is some additional step to the definition processing which is making it slow. Figuring out what steps are slow could make a big difference in speeding up the code without "throwing the baby out with the bathwater". We also have some abilities to process the graphs outside the dataset. This was required for the predict server. We should compare timing of the two approaches, and maybe considering doing some of the processing outside the dataset. I'm experimenting right now with recomputing definitions between embeddings. I'm not using the dataset pipeline, but I wonder if I should. I guess it is a good way to compare the speeds of both approaches. There are whole articles on optimizing the dataset pipeline that we could look into. How would we like to in general want the caching to behave? Either, we can load all the data into the memory in the format ready for the network (but then, there can be the scaling issue if the data don't fit into memory), or we want to reload all the data each epoch (which can be slow). Or are there any other options? (like saving a cache to hard drive in TF format, I am not sure how feasible it would be). I would like to first properly understand our aim, and then we can try to figure out the technical details of whether we want to use tf.Dataset, or not. Note that the loader basically offers random access to the datapoints (but it is computing them every time they are accessed). 
To me, the basic requirements of training are like this: During one training epoch, we train on randomly generated batches (subsets) of the dataset. A batch is basically the size of GPU memory. At some point, we will no longer be able to hold the entire training data into RAM. Hence, we will have to load it in batches. These batches can either be the same size of the GPU-batch, or a more granular 'RAM-batch'. The loader will have to be fast enough such that one batch can be fetched while the previous batch is being processed by the GPU. (It's hard to imagine this being a problem, the loader seems relatively fast now.) To me, it seems that the proper way of dealing with this is as follows: . When training starts, we load the dataset into mmap memory (note that this is an operation that takes minimal memory regardless of the size of the dataset, this is the beauty of Captn Proto). Then, we calculate an index that contains the root nodes of all proof states and definitions we want to train on. This index is kept in memory permanently, but should be fairly small. 2. While epoch nis being trained, the next epochn+1is being prepared: The index from (1) is randomly permuted and split into batches. This should be pretty quick, but if it is not, we have the entire epoch to calculate it. 3. While batchiof epochnis being trained, batchi+1` is fetched from the Captn Proto dataset. That is, we calculate the forward closures of all the root nodes in the batch and load them into a Numpy array or whatever other datastructure. 4. We pray that fetching a batch is fast enough to keep the GPU busy. But if it is not, I guess we can parallelize this. I see, so we would like the Dataset to look ahead, and prepare a batch it was not asked about yet... I would have to look more into Dataset to see if it is happening by default, or how to do it. Also, at some point, we were considering moving the graph loader into Cython in case that would be a bottleneck (but I think we concluded there were more serious speed issues). Note that if it is indeed the case that we need tf.Dataset in order to parallelize over multiple GPU's, then I propose this scheme: A tf.Dataset basically corresponds to a RAM-batch. This is the largest size that we are willing to load into RAM. The tf.Dataset can the split this batch into smaller GPU-batches. The only other alternative I see is to load the entire Capt'n Proto dataset into a tf.Dataset. But if we run out of memory, then the tensorflow code will have to be responsible for swapping part of the tf.Dataset to disk. Does this functionality exist? (I would think so, because surely we are not the only ones with datasets that exceed RAM?) Looking though the tensorflow API, it looks to me like a lot of what I describe can be easily done using a combination of the prefetch functionality and the from_generator functionality. I spend some time digging through the codebase and through the tensorflow documentation. My impression is that using tf.data.Dataset is a good idea in general, and there is no reason why we can't have our cake and eat it too. Here is what I would suggest as a 'plan of attack': Get rid of the old C++ loader code (it clutters up the repo, and the next steps will break it) Is the tfgnn model now superior in every way to the tf2 model? If so, let's get rid of the tf2 model (if we ever need it again, it is in git's history). Let's merge tfgnn.dataset.Dataset and loader.py_data_server.DataServer into one class. 
I spent some time digging through the codebase and through the tensorflow documentation. My impression is that using tf.data.Dataset is a good idea in general, and there is no reason why we can't have our cake and eat it too. Here is what I would suggest as a 'plan of attack':

1. Get rid of the old C++ loader code (it clutters up the repo, and the next steps will break it).
2. Is the tfgnn model now superior in every way to the tf2 model? If so, let's get rid of the tf2 model (if we ever need it again, it is in git's history).
3. Let's merge tfgnn.dataset.Dataset and loader.py_data_server.DataServer into one class. This will save a lot of transformations in the pipeline and simplify the code. I don't really see any reason why we need two classes here.
4. Move the shuffling of the data as early into the pipeline as possible. Ideally at the first step, where a proof state or definition is still a single root node. The shuffle function requires a buffer, which scales linearly with the size of the data in the buffer. Hence, it is much cheaper to do this early in the pipeline. When the shuffling is the first step in the pipeline, it does not have to take any memory at all. Currently, the shuffling is happening way too late. (See the sketch below.)
5. Remove any calls to cache. There are currently multiple in the pipeline. At most, there should be one call, but ideally none at all.
6. Experiment with different prefetch buffer sizes (instead of AUTOTUNE).
7. Use the TF Profiler to see if we have any bottlenecks.
8. If there are still bottlenecks, experiment with adding num_parallel_calls to batch and any remaining map calls in the pipeline.
9. If there are still bottlenecks, implement the forward closure computation in Cython.
10. If there are still bottlenecks, use the TF interleave function to feed data into the pipeline in parallel.
11. If there are still bottlenecks, go sit in the corner and cry.

While I'm not very up-to-date on the code-level details of how the tf.data.Dataset is being used right now, let me just point out that tf.data.Dataset.cache supports on-disk caching too, and if I recall correctly this includes "fancy" features such as sharding (for when e.g. you need to split the data into multiple files because otherwise it's too big or too slow to read a batch from a single disk). So in principle one should be able to mostly keep the same pipeline once the data becomes larger than the available RAM, except for some fine-tuning of the on-disk caching. My impression is that trying to achieve this from scratch (handling serialization and deserialization, prefetching, sharding, etc.) would be a lot of work and also mostly reinventing the wheel...

Attached is a diagram of my proposal. I suggest entirely ditching the idea of a dataserver. It brings way too much overhead and complexity with it. Instead, I'd go for an entirely functional approach. stream-proposal.pdf

Thanks for looking into this everyone, especially @LasseBlaauwbroek!

> Is the tfgnn model now superior in every way to the tf2 model? If so, let's get rid of the tf2 model (if we ever need it again, it is in git's history).

Mostly. There were still a few more comparison experiments I wanted to run, but I could do those on v13 or v14 in old branches. I personally have no intention of going back to the TF2 model. All our new features are in the TFGNN model.

> Let's merge tfgnn.dataset.Dataset and loader.py_data_server.DataServer into one class. This will save a lot of transformations in the pipeline and simplify the code. I don't really see any reason why we need two classes here.

I'm not sure I understand the proposal here, but I'm open to it.

> Remove any calls to cache. There are currently multiple in the pipeline. At most, there should be one call, but ideally none at all.

Yes, we probably don't need all the calls to cache we have. One is because we split the data in the dataset and we need a cache before the split to prevent recomputing all the test data for the training data pipeline. If we handle training and validation data better, then this will go away.
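To make the shuffle-early, `num_parallel_calls`, and on-disk-cache suggestions in the plan above concrete, here is a minimal sketch; the element count, the `map` body, and the buffer sizes are illustrative, not taken from the repo:

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

# Root nodes are tiny (just indices), so shuffling them as the very first
# step needs almost no buffer memory, unlike shuffling full graphs later.
root_nodes = tf.data.Dataset.range(570_000)

ds = (root_nodes
      .shuffle(buffer_size=570_000)
      # Stand-in for the expensive per-element work (forward closures etc.);
      # num_parallel_calls lets tf.data run it on several threads.
      .map(lambda n: n * 2, num_parallel_calls=AUTOTUNE)
      .batch(128)
      # .cache("/path/to/cache")  # optional on-disk cache, as noted above
      .prefetch(AUTOTUNE))
```

Shuffling the root-node indices before the expensive map is what keeps the shuffle buffer cheap; the same element ordering then flows through the rest of the pipeline regardless of what the map actually computes.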
I'm not sure I understand all your proposals in detail yet, Lasse, but here are some high-level things I'd like from a pipeline, in order of preference:

1. Robust to tricky-to-debug errors (for example, indexing errors in graphs).
2. Easy to experiment with ideas. Both you and I have ideas for other things we would like to add to training. If it is easy to add different data (and avoid bugs when doing so), that would be great and improve our scientific productivity.
3. Scalable to larger data (and remaining runnable on at least 2 GPUs).
4. Fast. It doesn't have to be lightning fast, but it would be nice for it to be fast enough to get results in a few days of training.

I think we can easily do all of those things.
gharchive/issue
2023-02-01T13:17:05
2025-04-01T06:37:04.146339
{ "authors": [ "LasseBlaauwbroek", "fidel-schaposnik", "jasonrute", "mirefek" ], "repo": "IBM/graph2tac", "url": "https://github.com/IBM/graph2tac/issues/96", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2224865294
Removing ping

Issue: https://github.ibm.com/IBMPrivateCloud/roadmap/issues/62325. We no longer need to check the host's response to the ping. Logs from running the job:

```
Changing the hostname from cp-test-ibm-common-services.apps.cat.cp.fyre.ibm.com to cp-test-ibm-common-services.apps.cat.cp.fyre.ibm.com
configmap/cs-onprem-tenant-config unchanged
Given Custom Hostname: cp-test-ibm-common-services.apps.cat.cp.fyre.ibm.com
Host is reachable. Proceeding further...
Custom secret not configured
Deleting old job of iam-custom-hostname if exists
job.batch "iam-custom-hostname" deleted
Running custom hostname job
job.batch/iam-custom-hostname created
platform-auth-service is available.
job.batch/iam-custom-hostname condition met
successfully updated the custom hostname token_output
Access token is present.
```
gharchive/pull-request
2024-04-04T08:39:15
2025-04-01T06:37:04.149080
{ "authors": [ "DaniyalMustafa" ], "repo": "IBM/ibm-common-service-operator", "url": "https://github.com/IBM/ibm-common-service-operator/pull/1908", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2632807521
Error while loading shared libraries: libfuse.so.2

IBM Kubernetes Cluster
Kubernetes Version: 1.29.9 (Node with Ubuntu 24)
ibm-object-storage-plugin: 2.2.32

Since updating our Kubernetes nodes to Ubuntu 24 we get the following error in our container:

MountVolume.SetUp failed for volume "pvc-..." : mount command failed, status: Failure, reason: Error mounting volume: s3fs mount failed: s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory

The container stays in pending mode. The PVC, however, is bound. Example PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  labels:
    billingType: hourly
    region: eu-de
  annotations:
    ibm.io/add-mount-param: complement_stat,compat_dir
    ibm.io/auto-create-bucket: 'false'
    ibm.io/auto-delete-bucket: 'false'
    ibm.io/bucket: BUCKET_NAME
    ibm.io/endpoint: https://s3.direct.eu-de.cloud-object-storage.appdomain.cloud
    ibm.io/secret-name: s3-access-secret
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ibmc-s3fs-smart-perf-regional
```

The problem still persists. Also tested on a cluster node with Kubernetes version 1.31.1_1527 and Ubuntu 24. PVC YAML:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  annotations:
    ibm.io/add-mount-param: complement_stat,compat_dir
    ibm.io/auto-create-bucket: 'false'
    ibm.io/auto-delete-bucket: 'false'
    ibm.io/bucket: BUCKET_NAME
    ibm.io/endpoint: https://s3.direct.eu-de.cloud-object-storage.appdomain.cloud
    ibm.io/secret-name: SECRET_NAME
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc-s3fs-smart-perf-regional
```

Status is bound and the PersistentVolume is created, but the pod will not start:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: pvc-size-test
      image: public.ecr.aws/docker/library/busybox:stable
      command: ['/bin/sh', '-c']
      args: ['while true; do du -sh /data; sleep 10;done']
      volumeMounts:
        - mountPath: '/data'
          name: data
      resources: {}
```

This still leads to the error FailedMount and a pod stuck in ContainerCreating:

MountVolume.SetUp failed for volume "pvc-3a8a9b35-75a5-4289-8aaa-fcd38143bcb3" : mount command failed, status: Failure, reason: Error mounting volume: s3fs mount failed: s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory

Just switching to Ubuntu 20 as the base OS for the nodes resolves this error. Ubuntu 20 is deprecated and will soon no longer be supported by IBM Kubernetes Clusters, so we need a solution for this!
gharchive/issue
2024-11-04T13:22:14
2025-04-01T06:37:04.157937
{ "authors": [ "baal-lgln" ], "repo": "IBM/ibmcloud-object-storage-plugin", "url": "https://github.com/IBM/ibmcloud-object-storage-plugin/issues/152", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
467423293
Allow for authentication for configured proxy

This PR fixes issue #44.

@lpatino10 @marloncarvalho you also need to configure git, because your user doesn't match your commit

I see! It's using my enterprise username and e-mail. Sorry. I've changed it. What should I do now? Just commit something and push it? Fork again?

Never mind! I didn't know I could change the author with rebase and amend. Fixed. @lpatino10 @germanattanasio
gharchive/pull-request
2019-07-12T13:47:07
2025-04-01T06:37:04.159684
{ "authors": [ "germanattanasio", "marloncarvalho" ], "repo": "IBM/java-sdk-core", "url": "https://github.com/IBM/java-sdk-core/pull/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
829703321
Add getQueryParam utility method to support pagination This PR adds a small utility method to extract the value of a query parameter from a URL string. This will be used in the pagination support for Node. Checklist [x] npm test passes (tip: npm run lint-fix can correct most style issues) [x] tests are included [ ] documentation is changed or added :tada: This PR is included in version 2.10.0 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2021-03-12T02:43:49
2025-04-01T06:37:04.163027
{ "authors": [ "ibm-devx-automation", "mkistler" ], "repo": "IBM/node-sdk-core", "url": "https://github.com/IBM/node-sdk-core/pull/128", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
613448346
fix the wrong path issue

The upgrade-operator docs moved from /design to /user, but the docs referencing them were not updated.

/assign @danielxlee
gharchive/pull-request
2020-05-06T16:23:49
2025-04-01T06:37:04.163847
{ "authors": [ "xjtustt" ], "repo": "IBM/operand-deployment-lifecycle-manager", "url": "https://github.com/IBM/operand-deployment-lifecycle-manager/pull/375", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
325029802
http should redirect to https for dev.rotisserie.tv

http://dev.rotisserie.tv is the default nginx landing page. The Thanks slide at https://blog.eggshell.me/rotisserie-talk/#39 links to dev.rotisserie.tv, which by default is interpreted as http when you copy and paste it in: http://dev.rotisserie.tv. It should be a direct clicky link to https://dev.rotisserie.tv. Additionally, http://dev.rotisserie.tv should redirect to https://dev.rotisserie.tv.

[x] fix slides with direct link to https://dev.rotisserie.tv
[ ] make http://dev.rotisserie.tv redirect to https://dev.rotisserie.tv

Dev is for staging. Right now we are using Cert-Manager + Kube-Lego in the prod cluster, where dev also lives, which causes the issues with redirects. Cert-Manager + the IBM ingress controller should be controlling all traffic for the domain; however, Kube-Lego also sees the annotation and is setting up ingress resources for the domain. There is currently a bug with the ingress controller which keeps it from pulling records for the same host from two different ingress resources. Basically, as soon as PR #122 is merged and we re-deploy prod + dev + fortnite, we shouldn't run into this issue. The setup already assumes HTTP-to-HTTPS redirects.

Went ahead and updated the last slide. Thanks @MHBauer

confirm slide update. http redirects to https after removing kube-lego and switching everything over to cert-manager. confirm redirect. thanks guys. good job!
gharchive/issue
2018-05-21T19:22:00
2025-04-01T06:37:04.168752
{ "authors": [ "MHBauer", "eggshell", "mpetason" ], "repo": "IBM/rotisserie", "url": "https://github.com/IBM/rotisserie/issues/123", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1463431362
do not error out docker login although multicloudlab credential unknown

Signed-off-by: Xinchun Liu xcliu@ca.ibm.com

What this PR does / why we need it:

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

Specify your PR reviewers: /cc @ashank07 @pgodowski

With this, we can test build-tool / check-tool builds; we won't be able to push to quay.io/multicloudlab, though.

/lgtm
gharchive/pull-request
2022-11-24T14:28:24
2025-04-01T06:37:04.171929
{ "authors": [ "pgodowski", "xcliu-ca" ], "repo": "IBM/test-infra", "url": "https://github.com/IBM/test-infra/pull/496", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
421646003
new prefabs addition error

I am trying to add new prefabs (or objects). I added intents and entities in the IBM Cloud and also in the assets folder. But I am getting the error KeyNotFoundException. Can you please tell me the changes in the code needed to add new prefabs?

@scottdangelo Can you please help me out with this?

Sorry, but that is beyond the scope of this code pattern. You will need to do some research on Unity prefabs. You can look in the Unity editor at the keys/values in the scene where the existing prefabs are, and get some clues as to how to modify them for your purposes.
gharchive/issue
2019-03-15T18:14:44
2025-04-01T06:37:04.174189
{ "authors": [ "scottdangelo", "sidd-gupta" ], "repo": "IBM/vr-speech-sandbox-cardboard", "url": "https://github.com/IBM/vr-speech-sandbox-cardboard/issues/58", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1328686163
add notebook for troubleshooting in 4.5.1 to migrate custom monitor instances

Target issue: https://github.ibm.com/aiopenscale/tracker/issues/27047

Change sets: A notebook to migrate monitor instances for custom monitor definitions created in 4.0.X

@kishore-patel - Please review this notebook. Thanks.

@arsuryan @kishore-patel Could either of you please merge/rebase this PR?
gharchive/pull-request
2022-08-04T14:00:50
2025-04-01T06:37:04.176469
{ "authors": [ "akirafujiu", "arsuryan" ], "repo": "IBM/watson-openscale-samples", "url": "https://github.com/IBM/watson-openscale-samples/pull/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1466469102
Need timestamps on user build output/log

To troubleshoot performance problems, it would be helpful to have timestamps on the log output of the user build, to better see whether the time is spent in Z Open Editor, Zowe Explorer, RSE API calls, etc.

@ogauneau we now have timestamps in the user build log when the Z Open Editor v3.1.0 log level is set to DEBUG. Same setting for both logs.
gharchive/issue
2022-11-28T13:54:04
2025-04-01T06:37:04.178057
{ "authors": [ "ogauneau", "phaumer" ], "repo": "IBM/zopeneditor-about", "url": "https://github.com/IBM/zopeneditor-about/issues/292", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1740400524
How to optimise via IoU if too many similar objects are detected?

Hey, I always get a very large bbox when I query the text prompt "a house", while single ones are detected correctly. How can I avoid the large bbox? At the least, I want a general approach to filter out the large ones. Thanks.

You can add NMS after the output boxes. (A minimal sketch follows below.)

@rentainhe Does this repo provide an implementation of it?

> @rentainhe Does this repo provide an implementation of it?

Please refer to here: https://github.com/IDEA-Research/Grounded-Segment-Anything/blob/8124fe737dc877ec49a0881785119fe222a4c868/automatic_label_simple_demo.py#L104

yeah, thank you, I have seen it
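For reference, a minimal post-processing sketch along the lines suggested above, using `torchvision.ops.nms`; the boxes, scores, image size, and both thresholds are made-up illustrations, not values from the repo:

```python
import torch
from torchvision.ops import nms

# Hypothetical detector outputs: boxes in (x1, y1, x2, y2) pixel coordinates.
boxes = torch.tensor([[100., 100., 220., 240.],
                      [105., 102., 225., 238.],    # near-duplicate of box 0
                      [  0.,   0., 630., 630.]])   # the oversized "house" box
scores = torch.tensor([0.90, 0.85, 0.60])

# NMS removes overlapping duplicates; note it does NOT remove the oversized
# box, because its IoU with the small box is low (the union is huge).
keep = nms(boxes, scores, iou_threshold=0.5)
boxes, scores = boxes[keep], scores[keep]

# So additionally drop boxes that cover most of the image (threshold is a guess).
img_h, img_w = 640, 640
areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
small_enough = areas < 0.8 * img_h * img_w
boxes, scores = boxes[small_enough], scores[small_enough]
print(boxes, scores)
```

The area filter is what actually handles the very large box, since NMS alone only suppresses boxes that overlap each other heavily.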
gharchive/issue
2023-06-04T14:19:22
2025-04-01T06:37:04.191123
{ "authors": [ "XinyueZ", "rentainhe" ], "repo": "IDEA-Research/GroundingDINO", "url": "https://github.com/IDEA-Research/GroundingDINO/issues/131", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1962507719
Fix MetaClass warning for UImGuiSettings::ImGuiInputHandlerClass

Wed Oct 25 17:24:39 PDT 2023 Warning LogClass Property StructProperty UImGuiSettings::ImGuiInputHandlerClass defines MetaData key "MetaClass" which contains short type name "ImGuiInputHandler". Suggested pathname: "/Script/ImGui.ImGuiInputHandler". Module:ImGui File:Private/ImGuiModuleSettings.h

Looks like the relevant code was added in 5.1:
https://github.com/EpicGames/UnrealEngine/blob/5de4acb1f05e289620e0a66308ebe959a4d63468/Engine/Source/Runtime/CoreUObject/Private/UObject/Class.cpp#L3733C8-L3733C8
https://github.com/EpicGames/UnrealEngine/commit/43d504502bf8c100aa52b799e9dfb721c296d5ed
gharchive/pull-request
2023-10-26T00:35:43
2025-04-01T06:37:04.194411
{ "authors": [ "DoubleDeez" ], "repo": "IDI-Systems/UnrealImGui", "url": "https://github.com/IDI-Systems/UnrealImGui/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1211557223
🛑 IDR (well 1029401) is down In f66b9f4, IDR (well 1029401) (https://idr.openmicroscopy.org/webclient/?show=well-1029401) was down: HTTP code: 502 Response time: 84 ms Resolved: IDR (well 1029401) is back up in 5d80611.
gharchive/issue
2022-04-21T21:11:31
2025-04-01T06:37:04.209335
{ "authors": [ "snoopycrimecop" ], "repo": "IDR/upptime", "url": "https://github.com/IDR/upptime/issues/321", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2459185544
🛑 IDR (well 1046336) is down In bd090c8, IDR (well 1046336) (https://idr.openmicroscopy.org/webclient/?show=well-1046336) was down: HTTP code: 502 Response time: 83 ms Resolved: IDR (well 1046336) is back up in a7928bb after 27 minutes.
gharchive/issue
2024-08-10T17:00:43
2025-04-01T06:37:04.211839
{ "authors": [ "snoopycrimecop" ], "repo": "IDR/upptime", "url": "https://github.com/IDR/upptime/issues/4205", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
414341947
App crash on unknown suggester

Songs starred from a suggester that is no longer available crash the app. As the goal of starred songs is to remember interesting songs, and the app can potentially be used on different bot instances with incompatible providers, the best solution would probably be to try to use the starred song, and if no suitable provider is found, simply search for the song on the available providers.

will add a redirect to a search on all providers once #42 is implemented
gharchive/issue
2019-02-25T22:39:58
2025-04-01T06:37:04.297427
{ "authors": [ "FelixGail", "IIIuminator" ], "repo": "IIIuminator/EnQ", "url": "https://github.com/IIIuminator/EnQ/issues/41", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
260882483
Add support for validating "version", "@language", and "related" props Add support for validating "version", "@language", and "related" properties. For "related", a partial validation is done, only on embedded properties that are actually included, so that the related object may be minimal and only declare properties that distinguish the related version from the entity core to the test. Related objects are not fetched. Resolves #70. LGTM
gharchive/pull-request
2017-09-27T08:04:59
2025-04-01T06:37:04.305028
{ "authors": [ "mgylling", "ottonomy" ], "repo": "IMSGlobal/openbadges-validator-core", "url": "https://github.com/IMSGlobal/openbadges-validator-core/pull/172", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1162556823
data-quality-report: Improvements

- Now we only do stats on public data; anything else makes no sense anyway.
- Build in the background and store in a cache (instead of building on demand). Building once per day is fine. (A minimal sketch follows below.)
- We are going to add more stats, so build times will only get longer; it is currently about 10s on my laptop.

Also noting there is an inefficiency here (we look up how many public projects there are repeatedly when building), but as it builds in the background I'm not bothered at this stage.
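A minimal, framework-agnostic sketch of the build-in-background idea (the real app presumably uses its web framework's cache; the function bodies and key names here are placeholders):

```python
import threading
import time

REPORT_CACHE = {}  # stand-in for the application's real cache backend

def build_report():
    """Stand-in for the ~10s data-quality computation over public projects."""
    return {"built_at": time.time(), "public_projects": 0}

def rebuild_daily():
    while True:
        # Requests always read the last completed build, so they stay fast
        # even while a new build is in progress.
        REPORT_CACHE["data_quality"] = build_report()
        time.sleep(24 * 60 * 60)  # once per day is fine

threading.Thread(target=rebuild_daily, daemon=True).start()
```

The point of the design is that request handlers never pay the build cost; they only read whatever the background thread last stored.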
gharchive/pull-request
2022-03-08T11:44:18
2025-04-01T06:37:04.319387
{ "authors": [ "odscjames" ], "repo": "INDIGO-Initiative/database-app", "url": "https://github.com/INDIGO-Initiative/database-app/pull/113", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
951567501
Add "advanced pipeline", missing value imputing module, maybe more

So this is an ambitious goal, with some moving parts. We need to reevaluate depending on how fast we progress with other things. This does not really fit in any existing module, and it seems too long to put in module 1. Here are a few things that we could show:

- advanced pipeline: as currently in wrap-up quiz 1. Needs more details.
- missing-value imputing (a minimal sketch follows below)

Maybe we could present splines and dealing with time features in this module: https://scikit-learn.org/dev/auto_examples/applications/plot_cyclical_feature_engineering.html. This will require scikit-learn 1.0, which is not released yet though.

I created a MOOC 3.0 milestone, to make it less of a priority than MOOC 2.0, which should primarily be focused on improving the existing material. We can always re-milestone issues based on progress.
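If we do add a missing-values lesson, a minimal imputation sketch could look like this (toy data; the estimator choice is just for illustration):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data with missing values (NaN); a real lesson would use a course dataset.
X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 1, 0, 1])

# Imputing inside the pipeline means the imputation statistics are learned
# on training folds only, avoiding leakage during cross-validation.
model = make_pipeline(SimpleImputer(strategy="mean"), LogisticRegression())
model.fit(X, y)
print(model.predict(X))
```

This also ties in nicely with the "advanced pipeline" idea, since the imputer is just one more pipeline step.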
gharchive/issue
2021-07-23T13:11:35
2025-04-01T06:37:04.349471
{ "authors": [ "lesteve", "ogrisel" ], "repo": "INRIA/scikit-learn-mooc", "url": "https://github.com/INRIA/scikit-learn-mooc/issues/414", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1832368602
🛑 IO.GT MAIN SITE is down In 94a0dc5, IO.GT MAIN SITE (https://io.gt) was down: HTTP code: 520 Response time: 186 ms Resolved: IO.GT MAIN SITE is back up in b5df6cc.
gharchive/issue
2023-08-02T03:35:32
2025-04-01T06:37:04.387186
{ "authors": [ "aalonzolu" ], "repo": "IOGT/upptime", "url": "https://github.com/IOGT/upptime/issues/467", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2022769643
1.20.2 Support?

When will version 1.20.2 be supported?

pls add 1.20.4 support

pls add 1.20.6 support

pls support
gharchive/issue
2023-12-03T23:37:13
2025-04-01T06:37:04.388570
{ "authors": [ "TheFaik", "konsheng", "lipind" ], "repo": "IPECTER/LighterAPI", "url": "https://github.com/IPECTER/LighterAPI/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1880542395
🛑 Main is down In 28284e5, Main (https://ipmgroupuk.com) was down: HTTP code: 429 Response time: 350 ms Resolved: Main is back up in a5ba690 after 11 minutes.
gharchive/issue
2023-09-04T15:50:58
2025-04-01T06:37:04.391757
{ "authors": [ "GarethWright" ], "repo": "IPMGroupLtd/Uptime", "url": "https://github.com/IPMGroupLtd/Uptime/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
666410768
GrowTogether Application for Registering Team to CODE CAMP 1.0

This pull request template helps you complete an application to the CODE CAMP 1.0 HACKATHON. Use the checklist below to verify that you have followed the instructions correctly. (Put x inside [] like [x])

Select the type of Pull Request
[] Registration Pull request.
[x] Submission Pull request.

Checklist ✅
[] I have read the instructions on the README file before submitting my application and gone through CODE_OF_CONDUCT.
[] I have Submitted my team's details by making a folder of my team as instructed in How to send Pull Request page.
[] I have used the Markdown file template to add my information for the Hackathon.
[] I have given all the necessary details as mentioned in template index.md file and the details are correct and best to my knowledge.
[] I understand that a reviewer will merge my pull request after examining it or ask for changes in case needed.
[] I understand I should not tag or add a reviewer to this Pull Request.
[] I understand the Details added to the template will be used as a means of communication at the time of result declaration.
[] I have added the event to my calendar.

@pranavi79, instructions not followed. The files you made are in the Teams folder; they need to be under the Team folder.

@pranavi79 please follow the instructions again. Delete this repo, fork it again, and NOW drop your folder inside the FINALIST folder; there is already a file, INDEX.MD - update it.
gharchive/pull-request
2020-07-27T16:07:41
2025-04-01T06:37:04.504068
{ "authors": [ "Uyadav207", "pranavi79", "raghavg27" ], "repo": "ISTESRMNCR/CODE-CAMP-2020", "url": "https://github.com/ISTESRMNCR/CODE-CAMP-2020/pull/101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1435913878
🛑 Gas is down In daf00fb, Gas (https://app.albanesi.com.ar/Gas/assets/img/backgrounds/1.jpg) was down: HTTP code: 0 Response time: 0 ms Resolved: Gas is back up in f5b6f16.
gharchive/issue
2022-11-04T11:15:29
2025-04-01T06:37:04.512976
{ "authors": [ "Juanro22" ], "repo": "ITAlbanesi/uptime", "url": "https://github.com/ITAlbanesi/uptime/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2683442289
Generated code blocks in README don't have syntax highlighting Literate is generating code blocks that are marked with @example rather than julia in the docs/README, which means they don't have syntax highlighting. Investigate how to change that (see https://fredrikekre.github.io/Literate.jl/v2/outputformats/#Configuration, https://documenter.juliadocs.org/stable/man/syntax/#reference-at-example). Probably this will require generating the README.md and docs/src/index.md from the examples/README.jl separately with different Literate.markdown commands, and also making the Literate check workflow check both of those. See also #14.
gharchive/issue
2024-11-22T14:43:47
2025-04-01T06:37:04.545037
{ "authors": [ "mtfishman" ], "repo": "ITensor/ITensorPkgSkeleton.jl", "url": "https://github.com/ITensor/ITensorPkgSkeleton.jl/issues/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1091748987
🛑 Events is down In 549fdf3, Events (https://events.pt.ivao.aero) was down: HTTP code: 0 Response time: 0 ms Resolved: Events is back up in 0637b80.
gharchive/issue
2022-01-01T05:48:27
2025-04-01T06:37:04.560079
{ "authors": [ "pt-hq" ], "repo": "IVAO-Portugal/status-page", "url": "https://github.com/IVAO-Portugal/status-page/issues/1192", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1168527736
🛑 Website is down In 8f63ed5, Website (https://pt.ivao.aero/portal) was down: HTTP code: 0 Response time: 0 ms Resolved: Website is back up in bbc5c8a.
gharchive/issue
2022-03-14T15:16:23
2025-04-01T06:37:04.562403
{ "authors": [ "pt-hq" ], "repo": "IVAO-Portugal/status-page", "url": "https://github.com/IVAO-Portugal/status-page/issues/1321", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1276126167
🛑 Events is down In 8bb9151, Events (https://events.pt.ivao.aero) was down: HTTP code: 404 Response time: 476 ms Resolved: Events is back up in 1464c6f.
gharchive/issue
2022-06-19T17:45:29
2025-04-01T06:37:04.564681
{ "authors": [ "pt-hq" ], "repo": "IVAO-Portugal/status-page", "url": "https://github.com/IVAO-Portugal/status-page/issues/1508", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1065168489
🛑 Website is down In 109a5bc, Website (https://pt.ivao.aero/portal) was down: HTTP code: 0 Response time: 0 ms Resolved: Website is back up in 3e8e84c.
gharchive/issue
2021-11-28T02:26:53
2025-04-01T06:37:04.567252
{ "authors": [ "pt-hq" ], "repo": "IVAO-Portugal/status-page", "url": "https://github.com/IVAO-Portugal/status-page/issues/742", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1400017695
feat(devcontainer): enable dev environments in GH Codespaces and VS Code remote containers

This project is marked as open to contribution for Hacktoberfest, and as such I figured it would be desirable for contributors to have a simplified means of getting up and running with the developer setup. With the addition of the devcontainer configuration, this project is now compatible with GitHub Codespaces as well as VS Code Remote Containers. Contributors who use one of these options now get all of the dev setup steps completed for them, and you get peace of mind that contributions will come from setups with the correct usage of eslint, prettier, etc.

EDIT: Demo https://vimeo.com/757666222

@MrRobz yes, it's incredibly cool - I have already recorded a video, but it's 25Mb and apparently GH will only let you attach videos of up to 10Mb - I'm in the process of re-encoding it so it's only 10Mb :sweat: (I'll attach once that's done).

@MrRobz I've attached a link to vimeo (https://vimeo.com/757666222) which is a long demonstration of how this works - the video is being transcoded still, but should be watchable in a few minutes 👍

That was pretty amazing. One small suggestion: if there were some sample svg icons in a folder already present in the codespace, it would make developing and testing the app a lot easier. I'm thinking we could have the icons in these links downloaded and present in a folder when opening the codespace:
https://github.com/tailwindlabs/heroicons/tree/master/src/20/solid
https://github.com/Remix-Design/RemixIcon/tree/master/icons

@MrRobz I just looked into this, and I think it's probably a little out of scope for the purpose of this PR, due to the fact that collection names are stored in IndexedDB, so there's no way for me to establish a new icon collection and add files to it during the container build. I think you could modify the application code to do an initial scan of the icon-library directory and automatically create some collections based on any directories present there, and that would be the way to support pre-seeding the application state with some libraries. Basically, I think it would be super helpful if the application could default to having some collections, but I think that's probably a bit further than this PR is meant to go.

If you were able to merge this PR in, perhaps it could be a good first issue item for someone else in Hacktoberfest to make the required application code changes to support seeding the collection database and then dropping some svgs into the file system at container launch :+1:

> @MrRobz I just looked into this, and I think it's probably a little out of scope for the purpose of this PR, due to the fact that collection names are stored in IndexedDB, so there's no way for me to establish a new icon collection and add files to it during the container build. I think you could modify the application code to do an initial scan of the icon-library directory and automatically create some collections based on any directories present there, and that would be the way to support pre-seeding the application state with some libraries. Basically, I think it would be super helpful if the application could default to having some collections, but I think that's probably a bit further than this PR is meant to go.
> If you were able to merge this PR in, perhaps it could be a good first issue item for someone else in Hacktoberfest to make the required application code changes to support seeding the collection database and then dropping some svgs into the file system at container launch 👍

I understand. Let's do something simple: clone 2 other repos with icons in them to a folder in, say, home or desktop in the codespace machine. I tried adding these steps to the dockerfile but these folders aren't showing. Could you help with this pls?

RUN cd /home && git clone https://github.com/Remix-Design/RemixIcon.git
RUN cd /home && git clone https://github.com/tailwindlabs/heroicons.git

@MrRobz Sounds like a great way to get most of the way there! I believe your changes should actually be fine, but you've cloned to the /home directory, and not /home/node, which is where your file explorer is open to - I think you should probably be cloning to /home/node. Also, I believe the -C flag to git clone will let you skip having to cd into the clone directory 👍

> @MrRobz Sounds like a great way to get most of the way there! I believe your changes should actually be fine, but you've cloned to the /home directory, and not /home/node, which is where your file explorer is open to - I think you should probably be cloning to /home/node. Also, I believe the -C flag to git clone will let you skip having to cd into the clone directory 👍

That made it work. Thank you
gharchive/pull-request
2022-10-06T16:54:56
2025-04-01T06:37:04.603094
{ "authors": [ "MrRobz", "andrewbrey" ], "repo": "Icon-Shelf/icon-shelf", "url": "https://github.com/Icon-Shelf/icon-shelf/pull/135", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
201193252
How to update claims on refresh token refresh?

It's not an issue, it's a question.

Setup: Identity server - HybridAndClientCredentials
scope - IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Email, IdentityServerConstants.StandardScopes.Profile, IdentityServerConstants.StandardScopes.OfflineAccess, "api", "offline_access"
UpdateAccessTokenClaimsOnRefresh=true,
AllowOfflineAccess = true,

I was trying to update the custom claims on the client, but couldn't push the updated claim to the identity server. It would be great if you could point me in the right direction for updating the claim. Is the way I am trying to update the claim, by refreshing the token, right or not? I have a claim named facility_id, which has to be changed when the user changes the default facility. I have updated the facility_id in the client, but how do I update it in the database through the identity server?

Data management is not our problem - go to your database and update it yourself ;) Also - if the data frequently changes - it is probably not a good candidate for a claim in a token.

Thanks for the answer, I thought identity would take care of updating the claim in the database. Now it's clear.
gharchive/issue
2017-01-17T07:15:22
2025-04-01T06:37:04.626456
{ "authors": [ "leastprivilege", "sriram5052" ], "repo": "IdentityServer/IdentityServer4", "url": "https://github.com/IdentityServer/IdentityServer4/issues/705", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
823701085
WebKit WPE main/HEAD doesn't build using the master/HEAD of WPEBackend-FDO ... the same doesn't happen with WPEBackend-FDO 1.8.0 --- /home/bot/toolchain/sysroots/armv7at2hf-neon-poky-linux-gnueabi/usr/lib/pkgconfig/wpe-1.0.pc 2021-03-05 05:36:02.854345912 +0000 +++ /home/bot/toolchain_1.9.1/sysroots/armv7at2hf-neon-poky-linux-gnueabi/usr/lib/pkgconfig/wpe-1.0.pc 2021-03-05 00:56:37.717953845 +0000 @@ -5,7 +5,7 @@ Name: wpe-1.0 Description: The wpe library -Version: 1.9.1 +Version: 1.8.0 Requires: xkbcommon Cflags: -I${includedir}/wpe-1.0 Libs: -L${libdir} -lwpe-1.0 There are changes in the includes that could be the source of this errors. Here the differences between 1.8.1 and 1.9: https://paste.debian.net/1188147/. Probably related with f461fd4d306436bcefa0bdbf1821a191d7462c38. commit f461fd4d306436bcefa0bdbf1821a191d7462c38 Author: Adrian Perez de Castro <aperez@igalia.com> Date: Thu Nov 26 15:47:43 2020 +0200 Simplify public headers Remove unneeded inclusions of libwpe headers, preferring forward declarations of the needed types and make all the inclusions use double quoted paths (to prefer local header versions, as opposed to installed ones). The issue only affects the Tools/wpe/backends in WebKit so far. For example, you still can build WPE by disabling the tools (-DENABLE_TOOLS=OFF) ../../Tools/wpe/backends/HeadlessViewBackend.cpp:85:22: error: ‘wpe_view_activity_state_visible’ was not declared in this scope Full list of errors: https://paste.debian.net/1188149/ Can confirm this. For backends code in WebKit, a simple explicit include of <wpe/wpe.h> in ViewBackend.h works. This was because of the recent cleanups in the WPEBackend-fdo headers, which no longer end up including <wpe/wpe.h>; I think we should make <wpe/fdo.h> include it again so the expectations of existing applications stay the same. I'll make a PR later today.
gharchive/issue
2021-03-06T17:27:11
2025-04-01T06:37:04.640098
{ "authors": [ "aperezdc", "psaavedra", "zdobersek" ], "repo": "Igalia/WPEBackend-fdo", "url": "https://github.com/Igalia/WPEBackend-fdo/issues/141", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
70930135
WndProc &HF030& I tried to catch the maximize event but failed. Also I can't disable control box and buttons. Are you using the NuGet package? This should be fixed in the project, but I simply haven't pushed the update to NuGet yet. Yes, I am using NuGet package. Maybe I should wait for your updates or compile your source on my own, right? Thanks in advanced. Sorry for the delay, but MaterialSkin 0.2.1 has been pushed to NuGet. (Including the ability to disable the control box buttons)
gharchive/issue
2015-04-25T15:52:35
2025-04-01T06:37:04.645588
{ "authors": [ "IgnaceMaes", "NoobTW" ], "repo": "IgnaceMaes/MaterialSkin", "url": "https://github.com/IgnaceMaes/MaterialSkin/issues/45", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1977240298
is this hostable in replit?

Environment
asd

Steps To Reproduce
asdw

Software Version
asd

Expected Behavior
asd

Actual Behavior
asd

Screenshots
asd

Severity
Trivial

Priority
Low

Type
Other

Reproducible
[X] Yes
[X] No

Additional Information
asd

From what I know, the bot should work well on Replit, it just requires extra configuration. Only the dashboard does not work on Replit; it returns errors during deployment. However, I would like to point out that Replit is not a supported hosting option, and errors that occur there will not be fixed.
gharchive/issue
2023-11-04T08:48:55
2025-04-01T06:37:04.688034
{ "authors": [ "IgorKowalczyk", "kordnddn" ], "repo": "IgorKowalczyk/majo.exe", "url": "https://github.com/IgorKowalczyk/majo.exe/issues/659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
751355255
How to work remotely with a node behind a login server

Is your feature request related to a problem? Please describe.
I am working remotely with a cluster. I log into the login server, and then request a node, to which I then log in. I manage to run remotely on the server, and see plots and help files. When I connect to the node, I do manage to start radian and attach. However, plots and help are not displayed. I set the environment variable TMPDIR so that tempdir() points to a directory that is mounted on the login server. And that directory does work when used from the server. I can see that when I'm on a node, plots are generated inside that directory (and I can manually see them). But I can't have them automatically appear. I did try to do ssh port forwarding with -R when logging into the node from the server. That didn't help. I'm not sure I did it correctly.

Describe the solution you'd like
I'd like to be able to work remotely in that setup

Describe alternatives you've considered
See above

You could manually start radian and attach, and the vscode status bar could show the pid of the attached session?

Would you like to share ls.str(.vsc) in your attached R session?

Below is the output. $HOME is my home directory, and $WORK is a work directory (I did a search/replace on the output):

attach : function ()
capture_str : function (object)
check_null_dev : function ()
dataview_data_type : function (x)
dataview_table : function (data)
dir_extension : chr "$HOME/.vscode-R"
dir_plot_history : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R/images"
dir_session : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R"
get_timestamp : function ()
globalenv_file : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R/globalenv.json"
globalenv_lock_file : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R/globalenv.lock"
homedir : chr "$HOME"
new_plot : function ()
null_dev_id : Named int 2
null_dev_size : num [1:2] 10.1 10.1
path_to_uri : function (path)
pid : int 190091
plot_file : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R/plot.png"
plot_history_file : NULL
plot_lock_file : chr "$WORK/seir_regression/TMP/RtmpvxGtqk/vscode-R/plot.lock"
plot_updated : logi FALSE
print.help_files_with_topic : function (h, ...)
rebind : function (sym, value, ns)
request : function (command, ...)
request_browser : function (url, title, ..., viewer)
request_file : chr "$HOME/.vscode-R/request.log"
request_lock_file : chr "$HOME/.vscode-R/request.lock"
rstudioapi_enabled : function ()
show_browser : function (url, title = url, ..., viewer = getOption("vsc.browser", "Active"))
show_dataview : function (x, title, viewer = getOption("vsc.view", "Two"))
show_globalenv : logi TRUE
show_page_viewer : function (url, title = NULL, ..., viewer = getOption("vsc.page_viewer", "Active"))
show_plot : logi TRUE
show_view : logi TRUE
show_viewer : function (url, title = NULL, ..., viewer = getOption("vsc.viewer", "Two"))
show_webview : function (url, title, ..., viewer)
tempdir : chr "$WORK/seir_regression/TMP/RtmpvxGtqk"
unbox : function (x)
update_globalenv : function (...)
update_plot : function (...)
wd : chr "$WORK/seir_regression"

vscode status bar could show the pid of the attached session when the session is attached?

When I'm on the main server, the pid is updated. When I'm on the node it isn't. Note that the remote ssh is to the server, not to the node. The files are served from the server. Since the directory structure on both is identical (other than /tmp), I can edit files and run things.
But remote is connected to the initial server.
But remote is connected to the initial server. believe this is the same issue I'm having in #552 .. like @michael-lachmann I too want to be able to use r session watcher on a compute node, not the login node that I initially Remote-SSH into. I added this line to my .Renviron to point the tmp dir to my $HOME directory: TMPDIR="/home/cmr46993/tmp" I think we're almost to a fix because if I use R: Create R Terminal now, the plots pngs go here: /home/cmr46993/tmp/RtmpNZ3ffC/ BUT if I launch a new Terminal manually, connect to a compute node, and launch radian, the plots pngs go here: /home/cmr46993/tmp/RtmpMmBQiz/ And then if I launch yet another Terminal manually, I see a third tmp directory 😆 I hope this is a simple fix @renkun-ken because this will be a game changer for developing code for me and apparently others too :) Actually, perhaps this issue is related to: https://github.com/microsoft/vscode-remote-release/issues/1722 There's currently no easy way to Remote SSH into an interactive session on a compute node.
gharchive/issue
2020-11-26T07:27:23
2025-04-01T06:37:04.700369
{ "authors": [ "michael-lachmann", "radlinsky", "renkun-ken" ], "repo": "Ikuyadeu/vscode-R", "url": "https://github.com/Ikuyadeu/vscode-R/issues/469", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
474585034
Lesson04

I did everything according to the assignment :) but I also want to make the data come from an API, and to make it possible to add your own message (if the review happens later than Wednesday, I think I'll have time to commit it).

Testing whether pull requests change their number after being cancelled.
gharchive/pull-request
2019-07-30T13:27:39
2025-04-01T06:37:04.702918
{ "authors": [ "IlGoloviy" ], "repo": "IlGoloviy/React-GeekBrains", "url": "https://github.com/IlGoloviy/React-GeekBrains/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1838298629
bug: updated Fastro weirdly hangs tests Newer Fastro versions weirdly hang tests when I run them. I don't know what exactly has changed, but that's no longer the case. Most probably it was something on Fastro's end, however it seems to be resolved now
gharchive/issue
2023-08-06T18:22:08
2025-04-01T06:37:04.711020
{ "authors": [ "Im-Beast" ], "repo": "Im-Beast/http_benchmarks", "url": "https://github.com/Im-Beast/http_benchmarks/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2451732721
Support linux older than glibc2.29

On linux, libcimgui.so is built against glibc 2.29. Log from a RHEL8 / Rocky8 system, which is based on glibc 2.28:

/lib64/libm.so.6: version `GLIBC_2.29' not found (required by [...]/bin/Debug/net8.0/runtimes/linux-x64/native/libcimgui.so)

rhel8/rocky8 are long-term distributions; that's why it could be interesting to have another linux build in the nuget package. Is it an option? Thank you for considering the question. Colin.

that's a https://github.com/cimgui/cimgui question. Please post it there; otherwise correct me if I am wrong.

cimgui is not built at all. You choose the compiler.

oh i see, @colaub can you help me figure out which linux version i should use such that I get glibc 2.28?

ref: https://github.com/ImGuiNET/ImGui.NET-nativebuild/blob/master/.github/workflows/build.yml#L29
gharchive/issue
2024-08-06T21:37:05
2025-04-01T06:37:04.715425
{ "authors": [ "colaub", "sonoro1234", "zaafar" ], "repo": "ImGuiNET/ImGui.NET", "url": "https://github.com/ImGuiNET/ImGui.NET/issues/491", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1915956206
Reorganize repository structure

We can probably have a more intuitive and clean look if we reorganize as follows:

- Dockerfiles -> docker
- sqlQueries (I would also rename this to "queries"), Notebooks, SevenBridges, Terra -> workflows
- architectureDiagrams, pricingOptimization, sampleManifests -> docs

Thinking more about this, maybe "resources" is a better name than "docs" for the content mentioned above.
gharchive/issue
2023-09-27T16:38:59
2025-04-01T06:37:04.781684
{ "authors": [ "fedorov" ], "repo": "ImagingDataCommons/Cloud-Resources-Workflows", "url": "https://github.com/ImagingDataCommons/Cloud-Resources-Workflows/issues/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
352532352
False negative for University of Illinois Libraries https://doi.org/10.5210/fm.v18i6.4340 is available from the publisher at http://journals.uic.edu/ojs/index.php/fm/article/view/4340/3687 , although not in PDF. API: { "best_oa_location": null, "data_standard": 2, "doi": "10.5210/fm.v18i6.4340", "doi_url": "https://doi.org/10.5210/fm.v18i6.4340", "genre": "journal-article", "is_oa": false, "journal_is_in_doaj": false, "journal_is_oa": false, "journal_issns": "1396-0466", "journal_name": "First Monday", "oa_locations": [], "published_date": "2013-06-03", "publisher": "University of Illinois Libraries", "title": "Assigning Wikipedia editing: Triangulation toward understanding university student engagement", "updated": "2018-06-17T07:18:14.930443", "year": 2013, "z_authors": [ { "family": "Roth", "given": "Amy" }, { "family": "Davis", "given": "Rochelle" }, { "family": "Carver", "given": "Brian" } ] } https://support.unpaywall.org/public/tickets/d3736be7c80f599113502fe94c78c682436e7630ab317fdc4911de3aa11d9097
gharchive/issue
2018-08-21T13:13:15
2025-04-01T06:37:04.791373
{ "authors": [ "nemobis", "richard-orr" ], "repo": "Impactstory/oadoi", "url": "https://github.com/Impactstory/oadoi/issues/95", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
340150602
Bug in com.impetus.kundera.classreading.Reader - getResourceIterator

A Spring project was created with a name that has spaces. When persistence.xml doesn't have any classes configured, Kundera scans for entity classes on the classpath. Since the path is URL-encoded, spaces are converted to %20 and the URL string doesn't seem to read the file (illustrated below).

Suggestion: String urlString = url.toString(); can be changed to url.getFile() in Line 137

@Kinle url.getFile() will also return a string with spaces converted to %20

I've also encountered this issue; it'd be nice if someone could look into resolving it.
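For illustration only (the actual fix would live in the Java Reader class, and the path below is hypothetical), the underlying encode/decode mismatch looks like this:

```python
from urllib.parse import unquote

# A classpath URL for a project whose name contains spaces is URL-encoded:
url_path = "/home/dev/My%20Spring%20Project/target/classes"  # hypothetical path
print(unquote(url_path))  # -> /home/dev/My Spring Project/target/classes
```

As the follow-up comment notes, url.getFile() alone still leaves the %20 in place, so a decode step like the one above is what actually resolves the spaces before the file can be read.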
gharchive/issue
2018-07-11T08:56:15
2025-04-01T06:37:04.796423
{ "authors": [ "Kinle", "ccarpenter04", "devender-yadav" ], "repo": "Impetus/Kundera", "url": "https://github.com/Impetus/Kundera/issues/1021", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1262737970
Close #30 | EmoT Dataset

Please name your PR after the issue it closes. You can use the following line: "Closes #ISSUE-NUMBER" where you replace the ISSUE-NUMBER with the one corresponding to your dataset.

Checkbox
[x] Confirm that this PR is linked to the dataset issue.
[x] Create the dataloader script nusantara/nusa_datasets/my_dataset/my_dataset.py (please use only lowercase and underscore for dataset naming).
[x] Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _NUSANTARA_VERSION variables.
[x] Implement _info(), _split_generators() and _generate_examples() in dataloader script.
[x] Make sure that the BUILDER_CONFIGS class attribute is a list with at least one NusantaraConfig for the source schema and one for a nusantara schema.
[ ] Confirm dataloader script works with datasets.load_dataset function.
[ ] Confirm that your dataloader script passes the test suite run with python -m tests.test_nusantara --path=nusantara/nusa_datasets/my_dataset/my_dataset.py.
[ ] If my dataset is local, I have provided an output of the unit-tests in the PR (please copy paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.

Tests done:
data = load_dataset("nusantara/nusa_datasets/emot/emot.py", name="emot_source")
python -m tests.test_nusantara nusantara/nusa_datasets/emot/emot.py
make check_file=nusantara/nusa_datasets/emot/emot.py

Hi @gentaiscool, thanks for the PR! I found two issues in the emot dataset:

The first row of the data for each subset is the file header, as shown below:
>>> x['train'][0]
{'index': '0', 'sentence': 'label', 'label': 'tweet'}
>>> x['validation'][0]
{'index': '0', 'sentence': 'label', 'label': 'tweet'}
>>> x['test'][0]
{'index': '0', 'sentence': 'label', 'label': 'tweet'}
Could you please remove the first line? In addition, since source means following the data source format, it would be better if we changed the sentence key to tweet.

The text and label are swapped for both the source and nusantara schema (the text content should be the label, and the label content should be the text).
>>> x['train'][3]
{'id': '3', 'text': 'fear', 'labels': ['yaudah kalo emang belum berani potong rambut pendek ya nanti" aja kalo emang udah yakin dan bisa nyaman Selamat beristirahat, jangan lupa berdoa Tidur yg nyenyak & mimpi yg indah Good night cit ']}
>>> x['validation'][3]
{'id': '3', 'text': 'anger', 'labels': ['[USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] [USERNAME] Koq Ngabalin di sebut tokoh, gak salah tuh, hidungnya tambah gede ntar, toko sorban kali ya.']}
>>> x['test'][3]
{'id': '3', 'text': 'fear', 'labels': ['[USERNAME] [USERNAME] [USERNAME] Gw lebih khawatir lg kalo Ahok jd jurkamnya Jokowi bro. Nemo movement malah berbalik jd ikon punya Jokowi. Persis kejadian 212 yg jd panggung Jkw']}
Could you fix this problem? Thanks!

@SamuelCahyawijaya I fixed the swap issue.
>>> data["test"][0]
{'index': '0', 'tweet': 'Pixy ini kok lama-lama gemesim yaaaa. Setelah jatuh cinta sama lip cream nya, kayak nya bakal jatih cinta sama yang lain. Huft [URL]', 'label': 'love'}
Dan belum ada obatnya.', 'label': 'fear'} >>> data["train"][0] {'index': '0', 'tweet': 'Ini adalah hal yang paling membahagiakan saat biasku foto bersama ELF #ReturnOfTheLittlePrince #HappyHeeChulDay', 'label': 'happy'} Hello @gentaiscool, the unit test returned an error. Please kindly check the log I attach below. INFO:__main__:args: Namespace(path='nusantara/nusa_datasets/emot/emot.py', schema=None, subset_id=None, data_dir=None, use_auth_token=None) INFO:__main__:self.PATH: nusantara/nusa_datasets/emot/emot.py INFO:__main__:self.SUBSET_ID: emot INFO:__main__:self.SCHEMA: None INFO:__main__:self.DATA_DIR: None INFO:__main__:Checking for _SUPPORTED_TASKS ... INFO:__main__:Found _SUPPORTED_TASKS=[<Tasks.EMOTION_CLASSIFICATION: 'EC'>] INFO:__main__:_SUPPORTED_TASKS implies _MAPPED_SCHEMAS={'TEXT'} INFO:__main__:schemas_to_check: {'TEXT'} INFO:__main__:Checking load_dataset with config name emot_source E ====================================================================== ERROR: runTest (__main__.TestDataLoader) Run all tests that check: ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/holy/projects/nusantara-datasets/tests/test_nusantara.py", line 185, in setUp self.dataset_source = datasets.load_dataset( File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/builder.py", line 694, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/builder.py", line 1095, in _prepare_split example = self.info.features.encode_example(record) File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/features/features.py", line 1296, in encode_example return encode_nested_example(self, example) File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/features/features.py", line 973, in encode_nested_example return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/features/features.py", line 973, in <dictcomp> return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 207, in zip_dict yield key, tuple(d[key] for d in dicts) File "/home/holy/anaconda3/envs/nusantara/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 207, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: 'sentence' ---------------------------------------------------------------------- Ran 1 test in 0.354s FAILED (errors=1) @holylovenia I have fixed it
gharchive/pull-request
2022-06-07T05:21:00
2025-04-01T06:37:04.869777
{ "authors": [ "SamuelCahyawijaya", "gentaiscool", "holylovenia" ], "repo": "IndoNLP/nusa-datasets", "url": "https://github.com/IndoNLP/nusa-datasets/pull/65", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1143723378
Add E2E test to Digital Twin Repo

This commit contains:

- Change of registry variables in Scorpio deployments and the Digital Twin created deployment to allow easier usage of a local registry (to avoid pulling from a private docker repo)
- Bats/Detik based Kubernetes tests to check k8s resource status
- Tests check operators and horizontal platform status
- Automated the ingress setup for the keycloak.local, alerta.local, ngsild.local urls and respective updates in README.md
- Automated K3s setup with two nodes and a local registry
- Linting of bats test files
- Take management of the postgres operator out of olm (since the version on operatorhub is too old)
- Upgrade of Postgres-operator to 1.7.1
- The Kubernetes version which is tested is 1.22

Closes #118

@wagmarcel is this the latest run for the CI? Seems to be failing https://github.com/wagmarcel/DigitalTwin/actions/workflows/k8s-tests.yaml

> @wagmarcel is this the latest run for the CI? Seems to be failing https://github.com/wagmarcel/DigitalTwin/actions/workflows/k8s-tests.yaml

No. Those were my test runs. The test of this PR is already listed in this PR - see the 2nd check: https://github.com/IndustryFusion/DigitalTwin/runs/5253256772?check_suite_focus=true

My bad... I clicked on the branch at the top and ended up on the fork.
gharchive/pull-request
2022-02-18T20:31:24
2025-04-01T06:37:04.876391
{ "authors": [ "sysarcher", "wagmarcel" ], "repo": "IndustryFusion/DigitalTwin", "url": "https://github.com/IndustryFusion/DigitalTwin/pull/119", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2347217375
Document that the module's files option does env substitution Currently, neither the readme nor the option docs mention that services.minecraft-servers.servers.<name>.files performs environment substitution. I just got bitten by this when working with binaries that were getting mysteriously corrupted, and the module mutating them was not something that I expected. The ideal scenario is IMHO that this is disabled by default and opt-in per file, or at least possible to opt out of, but it should really be documented. I can make a PR for either if it helps. :slightly_smiling_face: It's technically documented under the environmentFile option; however, I do agree that it should be documented under files as well. Making it toggleable should be pretty straightforward too, but would likely need some changes to the mkFiles script. Feel free to open PRs if you have the time :) Hi, sorry for not replying sooner. I kept thinking I'd find time for this at some point, but I'm starting to see I won't be able to anytime soon. On top of that, I'm somewhat considering moving my infra over from systemd to Nomad. While NixOS modules are superior in many ways, I'm starting to feel the need for a more flexible orchestrator, and would also prefer to spend less time worrying about every other NixOS service being completely unsandboxed. :) I just found this without seeing this bug report. This issue of doing env substitution blindly makes the files option unusable for setting up mods. And since the symlinks option puts a symlink directly to the Nix store, it doesn't look like there's a proper way to declaratively install mods, at least not the way the README suggests. The symlinks option is the proper way to declaratively install mods. Mods are binaries, so symlinking them to the Nix store is perfectly fine and generally preferable, as it reduces the space cost of copying files. The issue with this is that it's a symlink to the Nix store, and Paper (and probably other server codebases) likes to create the plugin configs in the same directory as the jars... which it can't do, since the Nix store is read-only. This causes Paper to bail with a "read-only file system" error because it can't write to the plugins directory the symlink points at. Unless there's some other way to get this to work? Is there a setting I overlooked where Paper can create plugin configs outside the plugins directory, which is a symlink to the Nix store? <Tired opinion (not because of you but because I've dealt with this before)> As far as I am concerned, this is because Paper is bad. Paper, and other plugin-centered projects, are the only ecosystem I've seen that feels like it has any dominion over the folder dedicated to binaries from the user. No other modloader does this. The fact that a mod port of a plugin did this (LuckPerms) frustrated me to the point where I just found an alternative instead. </Tired opinion> The proper way to work around this is to make symlinks individually in the manner you did with the files option, as opposed to symlinking the mods folder directly. (I.e., "mods/mod.jar" = ...;) There isn't a better option, because Paper tries to write to a directory it has no business writing to. You can, at minimum, do this a bit more easily with mapAttrs. Hmm, that's not a perspective I thought of. I honestly don't think it's a problem, but that's why it's opinion, I suppose. What server do you recommend? I exclusively use mod loaders and not plugins, so the use case is a bit different. For my part, I use Fabric and Quilt on all of my servers, including vanilla-compatible ones. Forge is also a possibility for highly modded ones, but since that isn't currently packaged (see #15), I just stick with the textiles. Will be fixed by #116
gharchive/issue
2024-06-11T20:12:01
2025-04-01T06:37:04.897511
{ "authors": [ "Infinidoge", "Misterio77", "YaroKasear", "frantisekhanzlikbl" ], "repo": "Infinidoge/nix-minecraft", "url": "https://github.com/Infinidoge/nix-minecraft/issues/70", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
627689997
Datasett: begrep og informasjonsmodeller (Dataset: concept and information models) Add a sentence to the Comment field for Datasett: begrep, stating that it is used only when an information model is not used, and refer to modelldcat-ap-no, or state that the two should be consistent when both are used. Fixed (#258)
gharchive/issue
2020-05-30T09:36:50
2025-04-01T06:37:04.937482
{ "authors": [ "jimjyang" ], "repo": "Informasjonsforvaltning/dcat-ap-no", "url": "https://github.com/Informasjonsforvaltning/dcat-ap-no/issues/257", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2455365666
feat: frontpage/data-hunter sync Resolves #130 #132 #138 #158 #159 https://github.com/Informasjonsforvaltning/fdk-team-private/issues/94 https://github.com/Informasjonsforvaltning/fdk-team-private/issues/87 This has already been merged by https://github.com/Informasjonsforvaltning/fdk-frontend/pull/164
gharchive/pull-request
2024-08-08T09:50:18
2025-04-01T06:37:04.941220
{ "authors": [ "Lillebo", "jeffreiffers" ], "repo": "Informasjonsforvaltning/fdk-frontend", "url": "https://github.com/Informasjonsforvaltning/fdk-frontend/pull/153", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2131871547
[L7J7] Pipeline reproduction (SPM - deriv) Softwares SPM12 Input data derivatives (fMRIprep) Additional context List of tasks Please tick the boxes below once the corresponding task is finished. :+1: [ ] :ok_hand: A maintainer of the project approved the issue, by assigning a :checkered_flag:status: ready for dev label to it. [ ] :deciduous_tree: Create a branch on your fork to start the reproduction. [ ] :sunrise: Create a file team_{team_id}.py inside the narps_open/pipelines/ directory. You can use a file inside narps_open/pipelines/templates as a template if needed. [ ] :inbox_tray: Create a pull request as soon as you completed the previous task. [ ] :brain: Write the code for the pipeline, using Nipype and the file architecture described in docs/pipelines.md. [ ] :blue_book: Make sure your code is documented enough. [ ] :snake: Make sure your code is explicit and conforms with PEP8. [ ] :microscope: Create tests for your pipeline. You can use files in tests/pipelines/test_team_* as examples. [ ] :microscope: Make sure your code passes all the tests you created (see docs/testing.md). @icorouge: I am adding you as assignee on this issue (just for the time of the hackathon) so that we can more easily see which pipelines are open for new contributions in https://github.com/orgs/Inria-Empenn/projects/1/views/1 Correlation results with 108 subjects : [0.93, 0.94, 0.93, 0.94, 0.94, 0.93, 0.94, 0.93, 0.95] with commit ba2b3dd
gharchive/issue
2024-02-13T09:31:01
2025-04-01T06:37:04.984890
{ "authors": [ "bclenet", "cmaumet" ], "repo": "Inria-Empenn/narps_open_pipelines", "url": "https://github.com/Inria-Empenn/narps_open_pipelines/issues/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1459883001
[ENH] add model U26C [x] add bids stats model for U26C Had to make some choices on how to define parametric regressors for SPM. This seems to be handled differently for SPM in nipype: https://nipype.readthedocs.io/en/latest/api/generated/nipype.algorithms.modelgen.html#module-nipype.algorithms.modelgen So I put them in the software options. I will try to do the same for the model from our teams to see if I can make this work without turning it into a headache. Not sure if this is helpful here (and Remi, you must know about this) but keeping a note so that I remember too... Just heard about the NARPS BIDS-stats-models example: https://bids-standard.github.io/model-zoo/exhibits/narps/model-narps_smdl.html from Alejandro. Yup, Rotem and I created this one, and I think I used it as a first step toward building these: https://github.com/Inria-Empenn/narps_open_pipelines/tree/main/narps_open/models
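For readers unfamiliar with why parametric regressors end up in "software options": in Nipype they are attached to the subject info passed to the SPM model-specification interface rather than to the BIDS stats model itself. A minimal sketch with hypothetical onsets and modulator values (not the actual U26C pipeline code):

# Hedged sketch: SPM parametric modulations in Nipype are declared via the
# pmod field of the subject_info Bunch consumed by SpecifySPMModel.
from nipype.interfaces.base import Bunch
from nipype.algorithms.modelgen import SpecifySPMModel

subject_info = [
    Bunch(
        conditions=["gamble"],
        onsets=[[0.0, 4.0, 8.0]],          # illustrative values
        durations=[[4.0, 4.0, 4.0]],
        pmod=[
            Bunch(
                name=["gain", "loss"],     # one parametric regressor each
                poly=[1, 1],               # first-order modulation
                param=[[10, 20, 15], [5, 12, 8]],
            )
        ],
    )
]

model_spec = SpecifySPMModel(
    input_units="secs",
    output_units="secs",
    high_pass_filter_cutoff=128,
    time_repetition=1.0,
)
model_spec.inputs.subject_info = subject_info  # functional runs would be set before running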
gharchive/pull-request
2022-11-22T13:33:29
2025-04-01T06:37:04.988506
{ "authors": [ "Remi-Gau", "cmaumet" ], "repo": "Inria-Empenn/narps_open_pipelines", "url": "https://github.com/Inria-Empenn/narps_open_pipelines/pull/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2572483061
Survey's last page message doesn't fit the displayed timestamp after Platine extraction Env PROD Collecter Survey Any Bug Description When viewing the questionnaire of a responding unit, the submission date of its questionnaire varies between the moment the unit has just sent it (we then see the actual date the business submitted it) and the moment the Platine extraction has run (we then see the Platine extraction date, around 6 a.m.), as shown in the screenshots below. ==> Once the extraction has run, we can no longer see at what time the unit submitted its questionnaire, especially since we have no access to the proof of deposit when clicking on it. Screenshot Expected behavior Show a time that matches the state. Solutions ~Change the batch strategy so the questionnaire state is no longer updated during extraction~ https://github.com/InseeFr/stromae-dsfr/issues/133#issuecomment-2399272922 Change the message according to the questionnaire state @AnneHuSKa We should decide which other message we want to display once the Platine extraction batch has run. I'm not sure how close it should stay to the "technical reality". Do you have an idea? @JulienCarmona I have some ideas, though not necessarily great ones: do you want to ask Collecter? They are the ones most affected, aren't they? Discussed with Collecter; as a first step we will display: If VALIDATED, nothing changes. If TOEXTRACT or EXTRACTED: remove the date; the message becomes: "Vos réponses ont bien été envoyées." ("Your answers have been sent.") Tested visually in the acceptance environment with a questionnaire pulled from PE in the VALIDATED state { "stateData": { "state": "VALIDATED", "date": 1730217107584, "currentPage": "endPage" } } With the TOEXTRACT and EXTRACTED states { "stateData": { "state": "TOEXTRACT", "date": 1730217107584, "currentPage": "endPage" } }
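A minimal sketch of the agreed message selection, using the state names from the JSON above; the function name and the VALIDATED wording are illustrative assumptions (the real implementation lives in the stromae-dsfr TypeScript code):

# Hedged sketch of the agreed behavior; after the Platine extraction batch
# (TOEXTRACT / EXTRACTED) the stored date no longer reflects submission time,
# so the message drops it.
from datetime import datetime, timezone

def end_page_message(state: str, date_ms: int) -> str:
    if state == "VALIDATED":
        sent_at = datetime.fromtimestamp(date_ms / 1000, tz=timezone.utc)
        return f"Vos réponses ont bien été envoyées le {sent_at.isoformat()}."
    if state in ("TOEXTRACT", "EXTRACTED"):
        return "Vos réponses ont bien été envoyées."
    return ""

print(end_page_message("TOEXTRACT", 1730217107584))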
gharchive/issue
2024-10-08T08:31:56
2025-04-01T06:37:04.999752
{ "authors": [ "AnneHuSKa", "JulienCarmona", "laurentC35" ], "repo": "InseeFr/stromae-dsfr", "url": "https://github.com/InseeFr/stromae-dsfr/issues/133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1508013959
BUG: Linux internal build failure is silent An issue was observed in the HASI repository where Linux wheel builds failed but jobs were marked as having completed successfully. MacOS and Windows wheels failed and were correctly marked as failed. https://github.com/KitwareMedical/HASI/actions/runs/3750380351/jobs/6370024283 Steps to Reproduce The HASI module depends on the ITKBoneEnhancement project. If ITKBoneEnhancement build outputs are not provided then HASI wheels are expected to fail. Fetch ITK build artifacts (without ITKBoneEnhancement) Try building HASI with ITKPythonPackage build scripts Observed behavior The HASI build begins inside the appropriate docker container and fails at the config stage. However, the failure does not seem to propagate from the container back to the host. A similar issue was encountered in https://github.com/InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction/issues/52 where a package failed to install inside the docker image, resulting in a build failure that is not caught by the GitHub Actions runner and subsequently reported as a success. A minimum path to address this issue is to validate that the expected wheel / number of wheels is present in dist/ after the build process completes and before artifact upload is attempted. Workaround introduced in https://github.com/InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction/pull/53 so that a job fails if expected wheel output is not produced. Closing for now with the potential to reopen if this fix proves insufficient for tracing errors.
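For reference, a minimal sketch of the wheel-presence validation described above; the expected count is an assumption, and the actual check introduced in PR #53 may differ:

# Hedged sketch: fail the CI job explicitly when no wheel was produced, so a
# silent in-container build failure cannot be reported as a success.
import glob
import sys

EXPECTED_WHEELS = 1  # illustrative; set to the number of wheels the job should emit

wheels = glob.glob("dist/*.whl")
print(f"found {len(wheels)} wheel(s): {wheels}")
if len(wheels) < EXPECTED_WHEELS:
    sys.exit("build produced no wheel output; failing the job before artifact upload")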
gharchive/issue
2022-12-22T14:44:36
2025-04-01T06:37:05.016178
{ "authors": [ "tbirdso" ], "repo": "InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction", "url": "https://github.com/InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction/issues/38", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
213316165
Using Instasharp without a session I am creating an Umbraco package; to do this I cannot use any ActionResult or a Session. Could you advise an alternative so that I can have a 'stateless' controller? Thanks The library is not tied to Sessions. You can store the data in any place.
gharchive/issue
2017-03-10T11:25:26
2025-04-01T06:37:05.023122
{ "authors": [ "JPawsey45", "fujiy" ], "repo": "InstaSharp/InstaSharp", "url": "https://github.com/InstaSharp/InstaSharp/issues/132", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
199135754
Question about an assert in IGListAdapter.m This is a little strange. #if DEBUG for (id object in objects) { IGAssert([object isEqual:object], @"Object instance %@ not equal to itself. This will break infra map tables.", object); } #endif When will [object isEqual:object] return NO? object won't be nil in the for-loop. Could you please attach an example for this? @PhilCai1993 Before ab890fc6070f170a2db5a383a6296e62dcf75678, when we used -isEqual: to test diffable objects, we had a user bug where someone's equality implementation accidentally returned NO even when the object was the same pointer. The error was really difficult to track down, so we added this assert so it won't ever happen again. However, since we changed the equality method to -isEqualToDiffableObject:, this assert should be updated. But I'm still confused... How could [obj isEqualToDiffableObject:obj] return NO? -(BOOL)isEqualToDiffableObject:(id)object { // it returns NO even when object == self, how could that happen? } @PhilCai1993 Purely by developer error. Non-obvious example: @interface MyClass: NSObject @property NSString *text; @end @implementation MyClass - (BOOL)isEqualToDiffableObject:(id)object { if (![object isKindOfClass:[MyClass class]]) { return NO; } return [self.text isEqualToString:[object text]]; } @end Then you create and compare: MyClass *left = [MyClass new]; NSLog(@"%zi", [left isEqualToDiffableObject:left]); // prints "0" (aka NO) That's because passing a message to nil returns a 0 value (NO in this case): self.text is nil here, so [self.text isEqualToString:...] evaluates to NO. Obviously this is fixed with a self == object check, but people make mistakes. The assert just makes a tricky-to-catch mistake impossible while debugging. Thanks a lot!
gharchive/issue
2017-01-06T06:49:11
2025-04-01T06:37:05.032116
{ "authors": [ "PhilCai1993", "rnystrom" ], "repo": "Instagram/IGListKit", "url": "https://github.com/Instagram/IGListKit/issues/387", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2087565795
About Identity Similarity... In the technical report, Fig. 6, "Jackie Chan" does not look like the reference image, especially his nose. I suppose the antelopev2 model should be able to extract Jackie Chan very well, as his images should be in the Glint360k training data. Is this a limitation of the face ID encoder? Also, the picture quality seems oversaturated. Is it because of the SDXL base model or your prompt? It should be related to the weight scale. We don't tune the parameters carefully. Anyway, talk is easy; we will show you the code. It may be a problem with the prompt. I changed the prompt and base model to generate a new image. At the same time, our model is constantly being optimized. @wangqixun which model were you using here? base model = https://civitai.com/models/43977?modelVersionId=227916 prompt = "cinema 4d render, high contrast, vibrant and saturated, sico style, dark and moody close-up shot of a handsome Saint-Pierrais man with a tired expression, (renaissance theme:1.1), colorful northern warrior, (glowing eyes:1.05), dynamic pose, hooded robe, surrounded by magical glow, floating ice shards, snow crystals, cold, windy background, frozen natural landscape in background cinematic atmosphere, highly detailed, sharp focus, intricate design, 3d, unreal engine, octane render, CG best quality, highres, photorealistic, dramatic lighting, artstation, concept art, cinematic, epic Steven Spielberg movie still, sharp focus, smoke, sparks, art by pascal blanche and greg rutkowski and repin, trending on artstation, hyperrealism painting, detailed character design, matte painting, 4k resolution" neg prompt = "asian, (worst quality, low quality, thumbnail:1.4), signature, artist name, web address, cropped, jpeg artifacts, watermark, username, collage, grid, nude, topless, nsfw, naked, nipples" The style is not very stable. Generated the image again. Correction: base model = https://civitai.com/models/84040?modelVersionId=196039
gharchive/issue
2024-01-18T05:58:27
2025-04-01T06:37:05.041533
{ "authors": [ "haofanwang", "renderless", "wangqixun" ], "repo": "InstantID/InstantID", "url": "https://github.com/InstantID/InstantID/issues/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
339056653
In the POSTULACIÓN projects tray, REVIEW the sum of expenses of the products associated with the project and verify which states it is reading. This is the code with problems: Greetings, the project currently has a change request under construction; the original project amount is $37,683,300 and so far $34,433,310 has been used in the change request, leaving $3,249,990 to be allocated. Since the full project amount has not been used, the error is thrown when trying to send the change request for review, as shown in the following image: Dear @danteghirardelli; I need you to do the following: Create a project and take it through to approval. Attach the report. Create a change request (SC), delete an expense. Download the report from Postulación. The project amount should always stay the same. A new ADO-entity project will be created in order to then make a change request for Goods and Real Estate expenses. @felipedonoso @pabloespinoza Project creation continues. @felipedonoso @pabloespinoza A change request will be made for Project 1800042030. Generate change request. The expenses are modified. In the report, the original amount of $11,000,000 is correctly kept. The expense was modified without approving the change request. The report keeps the real and original amount. The requested test is considered passed. @felipedonoso @pabloespinoza Thanks Pablo for the help. A new COCH project is created. Project 1800042031 submitted. @felipedonoso @pabloespinoza @felipedonoso @pabloespinoza Continue with Project 1800042031. Evaluate project. Project selected. A change request will be created for Project 1800042031. Generate change request. Send change request. Check via message. A Goods and Services expense is entered that exceeds the originally approved total project amount of $10 million. The project total is changed to a valid one. @felipedonoso @pabloespinoza The change request continues. Postulación: http://10.15.1.51/AltoRendimientoQA User: 15314968 Password: federacion The change request for project 1800042031 continues. Delete change request. @felipedonoso @pabloespinoza Why did the amount change? Total product cost: 10,000,000. Add a Goods and Services expense of $6,000,000, aiming to validate the 5% rule. The system validated this test, indicating that the product amount exceeds 10,000,000. The expense is changed to 5,000,000, validation is run again, and the 5% rule message appears. The SC is deleted, and the system has an error when showing the product amount, which went from 10,000,000 to 15,000,000. @felipedonoso @pabloespinoza @felipedonoso @pabloespinoza Create new COCH project "Tubito". @felipedonoso @pabloespinoza Let's review Project 1800042032. @felipedonoso @pabloespinoza A change request is made for Project 1800042033. Create a new COCH project and then approve a change request. @felipedonoso @pabloespinoza Creation of project "Usted Sabe" continues. Project "Usted Sabe" geolocation. @felipedonoso @pabloespinoza The project continues. Returning to the error detected by @danteghirardelli 👍 Total product cost: 10,000,000. Add a Goods and Services expense of $6,000,000, aiming to validate the 5% rule. The system validated this test, indicating that the product amount exceeds 10,000,000. The expense is changed to 5,000,000, validation is run again, and the 5% rule message appears. The SC is deleted, and the system has an error when showing the product amount, which went from 10,000,000 to 15,000,000. 1. The personnel expense will be lowered to free up the 6,000,000 for Goods and Services. From 10,000,000 to 6,000,000: 2. Changing the amount to 5 million: 3. The SC is deleted: The error has been fixed; the SC is deleted and the project returns to the original amount: @felipedonoso @pabloespinoza Verify project. Evaluate project entered. @felipedonoso @pabloespinoza Reviewing project "Usted Sabe". @felipedonoso @pabloespinoza Project "Usted Sabe" continues. Evaluate project entered. Submit Project 1800042036. Submitted. Evaluate pre-selected. Evaluate project selected. Evaluate project approved. Report before SC: Proyecto - 1800042036_AntesSC.pdf Afterwards, this Project 1800042036 will be continued.
gharchive/issue
2018-07-06T20:05:53
2025-04-01T06:37:05.109760
{ "authors": [ "danteghirardelli", "felipedonoso", "pabloespinoza" ], "repo": "InstitutoNacionalDeDeportes/PostulacionAltoRendimiento2017", "url": "https://github.com/InstitutoNacionalDeDeportes/PostulacionAltoRendimiento2017/issues/87", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
621639706
WEBAPP-562: Test content not found error poi This pull request belongs to an issue on our bugtracker. You can find it there by looking for an issue with the key which is mentioned in the title of this pull request. It starts with the keyword WEBAPP. The changes in FailureSwitcher are just what I think fit better with our mindset of preferring to test functionality. If you don't like it, feel free to object :) Examples for non-snapshot tests: it('should show correct text', () => { const wrapper = shallow(<MyComponent />); expect(wrapper.text().includes('my text')).toBe(true); }); const wrapper = shallow(<div><button className='btn btn-primary'>OK</button></div>); const button = wrapper.find('.btn'); expect(button.text()).to.be.eql('OK');
gharchive/pull-request
2020-05-20T10:13:47
2025-04-01T06:37:05.112583
{ "authors": [ "Taggotty", "maxammann" ], "repo": "Integreat/integreat-webapp", "url": "https://github.com/Integreat/integreat-webapp/pull/336", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
316116477
Mismatch between the Self Link in apiserver - K8s plumbing working group documentation and the implementation

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networks.kubernetes.cni.cncf.io
spec:
  group: kubernetes.cni.cncf.io
  version: v1
  scope: Namespaced
  names:
    plural: networks
    singular: network
    kind: Network
    shortNames:
    - net

Implementation: https://github.com/Intel-Corp/multus-cni/blob/122dbfb345ae4a13fd1d592723a0ba1603278dd9/multus/multus.go#L386 @s1061123 @dougbtv Creating a patch for it. #58 Fixed in the PR
gharchive/issue
2018-04-20T03:38:25
2025-04-01T06:37:05.114385
{ "authors": [ "rkamudhan" ], "repo": "Intel-Corp/multus-cni", "url": "https://github.com/Intel-Corp/multus-cni/issues/56", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
641957789
[hevce] Enable QP modulation Pass QP mod parameters to the driver @saosipov , @lakulako , @vilichev , please review We need to wait until the typo is fixed in the libva interface. It currently contains the field hierachical_flag, but it should be hieraRchical_flag. Relates to this: https://github.com/intel/libva/issues/429
gharchive/pull-request
2020-06-19T13:10:36
2025-04-01T06:37:05.116225
{ "authors": [ "alexelizarov", "dmitryermilov" ], "repo": "Intel-Media-SDK/MediaSDK", "url": "https://github.com/Intel-Media-SDK/MediaSDK/pull/2178", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1628414799
RE1 closes windowed or full screen at beginning It closes when you first enter the mansion or when you go into the dining room. Never had RE1 issues before this update the seed is R710-XVX3-XNE7603LCF020149K49K0WZZZZZZZZZZ00Z I was playing as Jill. Does it happen on vanilla as well or just when you load the biorand mod? Does it happen if you select some NPCs and BGMs, or disable those two things? it doesn't happen on vanilla. It happens with and without the NPCs and BGMs disabled Does it happen if use an older version of biorand? No everything is fine with an older version of BioRand. Unfortunately I can't reproduce it, so I am not sure why it is crashing for you. ok the 3.01 update fixed it brother so I'm gonna close this
gharchive/issue
2023-03-16T22:52:17
2025-04-01T06:37:05.118933
{ "authors": [ "IntelOrca", "Shfan3" ], "repo": "IntelOrca/biorand", "url": "https://github.com/IntelOrca/biorand/issues/301", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1520090171
The accuracy of the distance between 2 points is high in the center but is low on the edge, how can I improve it? Before opening a new issue, we wanted to provide you with some useful suggestions (Click "Preview" above for a better view): Consider checking out SDK examples. Have you looked in our documentation? Is your question a frequently asked one? Try searching our GitHub Issues (open and closed) for a similar issue. All users are welcome to report bugs, ask questions, suggest or request enhancements and generally feel free to open a new issue, even if they haven't followed any of the suggestions above :) Required Info: Camera Model = D435i; Operating System & Version = Win10; Language = python Issue Description Hi, sorry to bother you. I have read many issues but couldn't find the proper solution. I'm trying to measure the diameters of rebars in the RGB photos taken by a RealSense D435i. The problem is that the accuracy of the diameters is high for the rebars in the center but low for the rebars on the edge, as shown in the following figure (the ground truth is at the upper left; the calculation, using the Euclidean distance, is at the lower right). I'm wondering whether some key point was ignored. I don't know whether the problem is that the point cloud obtained with Python is not aligned correctly, because the rebars seem not to lie on the highest plane and have a bias. This is the code I use to transform the points from pixels to camera coordinates (using the point cloud method):

import pyrealsense2 as rs
import numpy as np
import cv2

''' camera setting '''
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)
profile = pipeline.start(config)

pc = rs.pointcloud()
points = rs.points()

# Define filters
# Decimation:
decimation = rs.decimation_filter()
# Depth to disparity
depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)
# Spatial:
spatial = rs.spatial_filter()
spatial.set_option(rs.option.holes_fill, 0)             # between 0 and 5, default 0
spatial.set_option(rs.option.filter_magnitude, 2)       # between 1 and 5, default 2
spatial.set_option(rs.option.filter_smooth_alpha, 0.5)  # between 0.25 and 1, default 0.5
spatial.set_option(rs.option.filter_smooth_delta, 20)   # between 1 and 50, default 20
# Temporal:
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)
temporal.set_option(rs.option.filter_smooth_delta, 20)
colorizer = rs.colorizer()

# Get info about depth scaling of the device
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

# Align to color
align_to = rs.stream.color
align = rs.align(align_to)

def get_aligned_images():
    frames = pipeline.wait_for_frames()
    # Apply filters
    pc_filtered = decimation.process(frames)
    pc_filtered = depth_to_disparity.process(pc_filtered)
    pc_filtered = spatial.process(pc_filtered)
    pc_filtered = temporal.process(pc_filtered)
    pc_filtered = disparity_to_depth.process(pc_filtered).as_frameset()
    # Align the depth frame to the color frame
    aligned_frames = align.process(pc_filtered)
    aligned_depth_frame = aligned_frames.get_depth_frame()
    aligned_color_frame = aligned_frames.get_color_frame()
    img_color = np.asanyarray(aligned_color_frame.get_data())
    img_depth = np.asanyarray(aligned_depth_frame.get_data())
    aligned_depth_color_frame = colorizer.colorize(aligned_depth_frame)
    img_depth_mapped = np.asanyarray(aligned_depth_color_frame.get_data())
    return img_color, img_depth, img_depth_mapped, aligned_color_frame, aligned_depth_frame, aligned_frames

def get_3d_camera_coordinate(depth_pixel, aligned_color_frame, aligned_depth_frame, aligned_frames):
    x = np.round(depth_pixel[1]).astype(np.int64)
    y = np.round(depth_pixel[0]).astype(np.int64)
    # pointcloud
    pc.map_to(aligned_color_frame)
    points = pc.calculate(aligned_depth_frame)
    points.export_to_ply("../frame_test.ply", aligned_color_frame)
    vtx = np.asanyarray(points.get_vertices())
    # print('vtx_before_reshape: ', vtx.shape)
    vtx = np.reshape(vtx, (1080, 1920, -1))
    # print('vtx_after_reshape: ', vtx.shape)
    camera_coordinate = vtx[y][x][0]
    # print('camera_coordinate: ', camera_coordinate)
    dis = camera_coordinate[2]
    return dis, camera_coordinate

Hi @weicuiting As the Decimation filter will be reducing the resolution of the depth image, are measurements more accurate if you comment out the Decimation filter, please? I really appreciate the quick reply! I just tried commenting out the Decimation filter, and the trend did not change (low accuracy on the edge); it even got worse. If the measurements are accurate at the center but become increasingly inaccurate when moving towards the edge of the image, this is usually because of an inaccuracy in the depth-color alignment. You seem to have applied alignment correctly in your script though, and placed the align process after the post-processing filter list as Intel recommends. Does accuracy improve if you align color to depth instead of depth to color by changing the align_to instruction from 'color' to 'depth'? align_to = rs.stream.depth As you are using pc.calculate to generate the point cloud and map_to to map color onto the depth points, it may not actually be necessary to use align_to to align depth to color. Thank you for the response! I tried 'align_to = rs.stream.depth', but the trend still didn't change. More importantly, it is not suitable for my program to use 'align_to = rs.stream.depth'. This would decrease the RGB resolution and introduce black edges, which affects the segmentation accuracy of rebars in the RGB photos (as the green masks shown in the following pictures).
Are you able to comment out the align instructions and let map_to perform the pointcloud alignment as suggested above at https://github.com/IntelRealSense/librealsense/issues/11293#issuecomment-1371935666 I commented out the align part, including 3 lines:

# Align to color
# align_to = rs.stream.color  ## commented out (1)
# align = rs.align(align_to)  ## commented out (2)

def get_aligned_images():
    frames = pipeline.wait_for_frames()
    # Apply filters
    pc_filtered = decimation.process(frames)
    pc_filtered = depth_to_disparity.process(pc_filtered)
    pc_filtered = spatial.process(pc_filtered)
    pc_filtered = temporal.process(pc_filtered)
    pc_filtered = disparity_to_depth.process(pc_filtered).as_frameset()
    # Align the depth frame to the color frame
    # aligned_frames = align.process(pc_filtered)  ## commented out (3)
    aligned_depth_frame = pc_filtered.get_depth_frame()
    aligned_color_frame = pc_filtered.get_color_frame()
    img_color = np.asanyarray(aligned_color_frame.get_data())
    img_depth = np.asanyarray(aligned_depth_frame.get_data())
    aligned_depth_color_frame = colorizer.colorize(aligned_depth_frame)
    img_depth_mapped = np.asanyarray(aligned_depth_color_frame.get_data())
    return img_color, img_depth, img_depth_mapped, aligned_color_frame, aligned_depth_frame, aligned_frames

But in the process of calculating the 'vtx', an IndexError occurred: index 1578 is out of bounds for axis 0 with size 640, where the 1578 is from RGB while the 640 is from depth. Should I change the x and y pixel values according to the resolution rate (rate x = 640/1920, rate y = 360/1080)?

def get_3d_camera_coordinate(depth_pixel, aligned_color_frame, aligned_depth_frame, aligned_frames):
    x = np.round(depth_pixel[1]).astype(np.int64)
    y = np.round(depth_pixel[0]).astype(np.int64)
    # compute the point cloud
    pc.map_to(aligned_color_frame)
    points = pc.calculate(aligned_depth_frame)
    # points.export_to_ply("../frame_test.ply", aligned_color_frame)
    vtx = np.asanyarray(points.get_vertices())
    # print('vtx_before_reshape: ', vtx.shape)  # 921600
    vtx = np.reshape(vtx, (360, 640, -1))
    # print('vtx_after_reshape: ', vtx.shape)  # (720, 1280, 1)
    camera_coordinate = vtx[y][x][0]
    # print('camera_coordinate: ', camera_coordinate)
    dis = camera_coordinate[2]
    """dis = aligned_depth_frame.get_distance(
        np.round(x).astype(np.int64), np.round(y).astype(np.int64))  # get the depth for this pixel
    # print('depth: ', dis)  # the depth unit is meters
    camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, dis)
    # print('camera_coordinate: ', camera_coordinate)"""
    return dis, camera_coordinate
Currently, the original method(using 'align_to' and 'map to') is better. Is there any other method to improve the edge inaccuracy using depth-color alignment? Can I get the pointcloud using python as accurately as using realsense viewer? Comparing your code to the align_depth2color.py example that the script seems to be based on, I note that you use this line: aligned_frames = align.process(pc_filtered) Whilst in align_depth2color_py it uses frames in the brackets instead of pc_filtered. aligned_frames = align.process(frames) The frames information comes from the frames = pipeline.wait_for_frames() line a little earlier in the script. Does this mean that I can't use the filters or use the filters after align? frames = pipeline.wait_for_frames() aligned_frames = align.process(frames) or frames = pipeline.wait_for_frames() aligned_frames = align.process(frames) pc_filtered = decimation.process(aligned_frames) pc_filtered = depth_to_disparity.process(pc_filtered) pc_filtered = spatial.process(pc_filtered) pc_filtered = temporal.process(pc_filtered) pc_filtered = disparity_to_depth.process(pc_filtered).as_frameset() You can still use the filters, yes. Intel's recommendation is to place align.process after the post-processing filter list. This is a recommendation rather than a requirement though, and there are rare cases where an application has performed much better when placing align.process before the post-processing filters. I have tried using align before filters, but it's a pity that the method didn't work.Is there other methods to improve the align accuracy or depth quality? (I'm think is there any problem with the depth quality) How does the realsense viewer get the pointcloud? How can I get the pointcloud using python as accurately as using realsense viewer ? (can any setting files export from viewer and then import to python) My understanding is that the RealSense Viewer pointcloud in its 3D mode is based on pc.calculate and map_to, and does not make use of align_to. RealSense Viewer is also a C++ application rather than a Python one. You could check whether there is a mis-calibration of your camera's depth sensing by resetting it to its factory-new default calibration in the RealSense Viewer using instructions at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487 OK, I'll calibrate the camera again. Does I just need to do the on_chip calibration, tare calibration and dynamic calibration? Except the problem of align, how can I improve the depth quality, which kind of setting is useful to reduce the volatility of rebar lines that is stright actually? Whilst on-chip calibration can be used to calibrate the camera, simply using the Viewer's factory-default calibration reset can work just as well. On-chip calibration improves depth image quality, whilst tare calibration improves depth measurement accuracy. Dynamic Calibration is a different method of calbration to on-chip that has the benefit of being able to calibrate the RGB sensor too. The grid of rebar objects has the potential to confuse the depth sensing algorithm of the camera by forming a repetitive pattern (a series of similar looking objects in horizontal and vertical arrangements, like ceiling / floor tiles). Intel have a guide at the link below to reducing the neative impact of repetitive patterns. https://dev.intelrealsense.com/docs/mitigate-repetitive-pattern-effect-stereo-depth-cameras Thank you for the links, I'll have a try! 
Sorry to bother again, is there any method to plot the points on the pointcloud.ply imported from map_to?(I want to check the point location on the pointcloud) Once a ply is exported then you can import it into other tools and also pointcloud processing libraries such as PCL and Open3D but not import a .ply directly back into the RealSense SDK and access its depth information. A .bag file is the best format for reading recorded depth data back into an SDK script. Is there any recommended samples for Open3D? There are some Open3D examples for RealSense at the link below. http://www.open3d.org/docs/0.12.0/tutorial/sensor/realsense.html The official RealSense documentation for the Open3D wrapper also has some Python example code. https://github.com/IntelRealSense/librealsense/tree/master/wrappers/open3d Thank you very much! I'll have a try! Hi @weicuiting Do you require further assistance with this case, please? Thanks! Case closed due to no further comments received.
gharchive/issue
2023-01-05T05:16:47
2025-04-01T06:37:05.156913
{ "authors": [ "MartyG-RealSense", "weicuiting" ], "repo": "IntelRealSense/librealsense", "url": "https://github.com/IntelRealSense/librealsense/issues/11293", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1782597612
hardware_reset generates "failed to set power state" in C++ application Camera Model = D455; Firmware Version = 5.13.00.50; Operating System & Version = Linux, Ubuntu 18.04; Kernel Version (Linux Only) = ?; Platform = NVIDIA Jetson Nano; SDK Version = ?; Language = C++ Issue Description My C++ app operates three D455 cameras. Normally it starts and runs correctly. I would like to add a feature that monitors some info and then may decide to perform a hardware reset on a single camera (not all three), and then re-enable the streams and re-start the pipeline for that one camera (leaving the other two running as they originally were). My attempt to do this is not working. All I did was run my code to determine if a reset is warranted (this part does work), and if it is, perform a hardware reset and then repeat the same code I used to originally (successfully) start up the camera. When I try the hardware_reset(), the OS throws an exception "failed to set power state" and the app stops. The reset/restart code I tried using appears below:

// Below, variables g_list_of_cameras, serials, and ctx have been previously defined before this code block executes
// This next block assumes that the devices of g_list_of_cameras are in the order Camera 1, 2, 3. If not, it won't work.
// Note that each "for" loop should only do anything for 1 device and skip all others:
int cn = 0;
for (auto&& dev : g_list_of_cameras)
{
    cn += 1;
    if (cn != camera) continue;
    cout << "resetting camera #" << camera << ", please wait ..." << endl;
    dev.hardware_reset();
    rs2::device_hub hub(ctx);
    dev = hub.wait_for_device();
    sleep(3);
}

// Re-Start streaming pipe for the reset device
cn = 0;
for (auto&& serial : serials)
{
    cn += 1;
    if (cn != camera) continue;
    cout << serial << "\n\r";
    rs2::pipeline pipe(ctx);
    rs2::config cfg;
    cfg.enable_device(serial);
    cfg.disable_all_streams();
    waitKey(10);
    cfg.enable_stream(RS2_STREAM_COLOR, _FWIDTH, _FHEIGHT, RS2_FORMAT_BGR8, _CAM_FRAME_RATE);
    cfg.enable_stream(RS2_STREAM_DEPTH, _FWIDTH, _FHEIGHT, RS2_FORMAT_Z16, _CAM_FRAME_RATE);
    cfg.enable_stream(RS2_STREAM_INFRARED, _FWIDTH, _FHEIGHT, RS2_FORMAT_Y8, _CAM_FRAME_RATE);
    pipeline_profile selection = pipe.start(cfg);
}

Should it be possible to use hardware_reset() on a camera more than once after initial powerup? What might be wrong with what I am doing? Hi @jpfsaunders There is not a C++ example of resetting a single specific camera by its serial number, though a RealSense team member provided one for Python at https://github.com/IntelRealSense/librealsense/issues/5428#issuecomment-564482167 that takes the approach of generating a list of all attached cameras with the ctx.query_devices() instruction and then querying a serial number in that list. There is also a C++ reset script at https://github.com/IntelRealSense/librealsense/issues/9287#issuecomment-867826974 that cycles through all attached cameras. This script also uses ctx.query_devices() Hi @MartyG-RealSense , Thank you for the quick response. The Python example is exactly what I am doing in C++; for me it only works once on powerup, and then if I try it a second time after that I get an error. I will look through the C++ reset script to see what they are doing, but it looks more involved, so it may take me a bit to figure out what is happening there. Thanks very much, @jpfsaunders - I look forward to your next report. Good luck!
Hi @jpfsaunders Do you have an update about this case that you can provide, please? Thanks! Hi @MartyG-RealSense , I have not made any progress, and this particular issue has been bumped down my priority list. The C++ script does not appear to be doing anything different from what I am already doing. About the only thing I can think of is to try to reset the USB controller on my hardware (Jetson Nano) separately, either before or at the same time as performing the hardware_reset(). I am not sure how to do that, but I will experiment once I can get back to looking at this. I would say we can close this issue. I will open a new one later if I can't figure out a workaround. Okay, thanks very much @jpfsaunders for the update.
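For reference, a minimal sketch of the per-serial reset approach from the Python example linked earlier in this thread; the serial number is a placeholder:

# Hedged sketch: reset only the camera whose serial number matches, leaving
# the other attached devices untouched.
import pyrealsense2 as rs

target_serial = "012345678901"  # hypothetical serial of the camera to power-cycle

ctx = rs.context()
for dev in ctx.query_devices():
    if dev.get_info(rs.camera_info.serial_number) == target_serial:
        print("resetting", target_serial)
        dev.hardware_reset()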
gharchive/issue
2023-06-30T14:10:24
2025-04-01T06:37:05.166477
{ "authors": [ "MartyG-RealSense", "jpfsaunders" ], "repo": "IntelRealSense/librealsense", "url": "https://github.com/IntelRealSense/librealsense/issues/11957", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
640420719
python application crashed when import pyrealsense2 on Win7 The device is a D415 (driver version: 2.30.0.0), with SDK 2.0 installed (Intel RealSense SDK - win7 - 2.35.2.1897). The Viewer works as expected. I installed Python 3.6.10 via Miniconda and ran "pip install pyrealsense2" successfully. When I tried to "import pyrealsense2", it crashed. The error is attached. https://support.intelrealsense.com/hc/user_images/JEkOxjDK-7M-hQ0fmg6Zng.jpeg Thanks, @liukelinlin, could you please try to build pyrealsense2 from source for Python 3 for Win7? https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python#windows Building the SDK2 for Win7 on either a Windows 7 or Windows 10 environment: https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_windows.md https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_win7.md#building-from-source Thank you Please let us know if further assistance is needed. Thank you. The ticket will be closed in 7 days if there is no other question on the same topic.
gharchive/issue
2020-06-17T12:56:26
2025-04-01T06:37:05.171494
{ "authors": [ "RealSenseCustomerSupport", "RealSenseSupport", "liukelinlin" ], "repo": "IntelRealSense/librealsense", "url": "https://github.com/IntelRealSense/librealsense/issues/6624", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2087939904
Could not find a model in internlm/internlm-xcomposer-7b-4bit with a name in gptq_model-4bit-128g.safetensors, model.safetensors I encountered this error trying to run python examples/example_chat_4bit_en.py. Thanks for any help. FileNotFoundError: Could not find a model in internlm/internlm-xcomposer-7b-4bit with a name in gptq_model-4bit-128g.safetensors, model.safetensors. Please specify the argument model_basename to use a custom file name. Same issue here. Check #51; it works for me.
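A minimal sketch of the model_basename workaround that the error message and #51 point at; this is hedged, since the exact loader used by the example script may differ:

# Hedged sketch: tell AutoGPTQ which checkpoint file stem to load instead of
# letting it guess between gptq_model-4bit-128g.safetensors and model.safetensors.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "internlm/internlm-xcomposer-7b-4bit",
    model_basename="gptq_model-4bit-128g",  # file stem, without the extension
    trust_remote_code=True,
    device="cuda:0",
)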
gharchive/issue
2024-01-18T10:05:30
2025-04-01T06:37:05.243130
{ "authors": [ "gordonhu608", "hank-nguyen" ], "repo": "InternLM/InternLM-XComposer", "url": "https://github.com/InternLM/InternLM-XComposer/issues/124", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1879424000
[Bug] ImportError: This modeling file requires the following packages that were not found in your environment: configuration_internlm. Run pip install configuration_internlm Describe the bug This bug appeared recently when running the code provided on Hugging Face; it did not occur in the previous version. ImportError: This modeling file requires the following packages that were not found in your environment: configuration_internlm. Run pip install configuration_internlm Environment python=3.8 transformers=4.31.0 Other information No response Unless I pass revision='' to pin a previous version. So it seems that there is something wrong with the latest version. Same problem here.
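For reference, a minimal sketch of the pinning workaround mentioned above; the model id and revision hash are placeholders, since the report does not state which commit was pinned:

# Hedged sketch: pin the remote-code repo to a known-good revision so the
# broken configuration_internlm import in the latest repo files is not pulled in.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "internlm/internlm-chat-7b"      # hypothetical; use the repo you are loading
REVISION = "<known-good-commit-hash>"        # placeholder for a commit that worked

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID, trust_remote_code=True, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, revision=REVISION)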
gharchive/issue
2023-09-04T03:03:51
2025-04-01T06:37:05.245661
{ "authors": [ "QichangZheng", "fairyshine" ], "repo": "InternLM/InternLM", "url": "https://github.com/InternLM/InternLM/issues/270", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2441523883
[Bug] Quantizing the glm4-9b-chat model throws an error Checklist [X] 1. I have searched related issues but cannot get the expected help. [X] 2. The bug has not been fixed in the latest version. [X] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback. Describe the bug Quantizing the glm4-9b-chat model throws an error; the quantization command is as follows:

export HF_MODEL=/sdd/model/lmdeploy/model/glm-4-9b-chat
export HF_MODEL=/sdd/model/lmdeploy/model/glm-4-9b-chat
export trust_remote_code=True
export HF_ENDPOINT=https://hf-mirror.com
lmdeploy lite auto_awq $HF_MODEL --calib-dataset 'ptb' --calib-samples 128 --calib-seqlen 2048 --w-bits 4 --w-group-size 128 --batch-size 1 --search-scale False --work-dir $WORK_DIR

Reproduction Quantizing the glm4-9b-chat model throws an error; the quantization command is as follows:

export HF_MODEL=/sdd/model/lmdeploy/model/glm-4-9b-chat
export HF_MODEL=/sdd/model/lmdeploy/model/glm-4-9b-chat
export trust_remote_code=True
export HF_ENDPOINT=https://hf-mirror.com
lmdeploy lite auto_awq $HF_MODEL --calib-dataset 'ptb' --calib-samples 128 --calib-seqlen 2048 --w-bits 4 --w-group-size 128 --batch-size 1 --search-scale False --work-dir $WORK_DIR

Environment (lmdeploy) [root@localhost ~]# lmdeploy check_env /data/conda/anaconda3/envs/lmdeploy/lib/python3.8/site-packages/transformers/utils/hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead. warnings.warn( sys.platform: linux Python: 3.8.19 | packaged by conda-forge | (default, Mar 20 2024, 12:47:35) [GCC 12.3.0] CUDA available: True MUSA available: False numpy_random_seed: 2147483648 GPU 0: NVIDIA GeForce RTX 3060 GPU 1: Tesla T4 CUDA_HOME: /usr/local/cuda-12.2 NVCC: Cuda compilation tools, release 12.2, V12.2.128 GCC: gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3) PyTorch: 2.2.2+cu118 PyTorch compiling details: PyTorch built with: - GCC 9.3 - C++ Version: 201703 - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01) - OpenMP 201511 (a.k.a. OpenMP 4.5) - LAPACK is enabled (usually provided by MKL) - NNPACK is enabled - CPU capability usage: AVX512 - CUDA Runtime 11.8 - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90 - CuDNN 8.7 - Magma 2.6.1 - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, TorchVision: 0.17.2+cu118 LMDeploy: 0.5.2+ transformers: 4.43.3 gradio: Not Found fastapi: 0.111.1 pydantic: 2.8.2 triton: 2.2.0 NVIDIA Topology: GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X SYS 0-11,24-35 0 GPU1 SYS X 12-23,36-47 1 Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks Error traceback No response Please set --search-scale False I have one T4 and one RTX 3060 Ti, 28 GB of VRAM in total. Is that not enough for quantization? Why does it report insufficient memory? From the error message, my GPU1 is not used for quantization; it seems only GPU0 participates and then runs out of memory. Quantization does not support multiple GPUs. It was designed to run layer by layer, internally with batch_size 1, so in theory one GPU is enough for arbitrarily large models. You can try calibrating with --calib-seqlen 512. That seems to work. What does this parameter do? Although quantization succeeded, I still can't run the model. Running a quantized 9B model with 28 GB of VRAM should in theory be fine, right? Try adding --cache-max-entry-count=0.3. With that parameter it runs, but my GPU0 is not involved at all; only GPU1 does inference. The two GPUs are a T4 and a 3060 Ti. Why is that? If I add --tp=2, startup reports the following error: In the current version, the shapes cannot be split evenly across two GPUs, so it won't run. So I can only run on a single GPU? Yes, single GPU. To increase throughput, you can start two services and do load balancing: https://lmdeploy.readthedocs.io/en/latest/serving/proxy_server.html For a single GPU, how do I specify which GPU to run on? The 'CUDA_VISIBLE_DEVICES' environment variable. OK. But if only a single GPU is allowed, what if I want to run a bigger model, e.g. a 72B one? No single GPU has that much VRAM... Most models can be divided evenly by tp; it is just this model's shapes that can't. Oh, I see. OK.
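A minimal sketch of pinning the serve process to one GPU via the environment variable mentioned above; the model path mirrors the one in this report, and the flags are the ones discussed, though the exact serve invocation you need may differ:

# Hedged sketch: make only GPU 1 (the T4) visible to a single-GPU lmdeploy process.
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")  # only GPU 1 is visible to the process
subprocess.run(
    ["lmdeploy", "serve", "api_server", "/sdd/model/lmdeploy/model/glm-4-9b-chat",
     "--cache-max-entry-count", "0.3"],
    env=env,
    check=True,
)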
gharchive/issue
2024-08-01T05:37:43
2025-04-01T06:37:05.258238
{ "authors": [ "AllentDan", "MdcGIt" ], "repo": "InternLM/lmdeploy", "url": "https://github.com/InternLM/lmdeploy/issues/2210", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2278960233
🛑 eidum.no is down In 7e17bdf, eidum.no (https://eidum.no) was down: HTTP code: 0 Response time: 0 ms Resolved: eidum.no is back up in d3a67e3 after 25 minutes.
gharchive/issue
2024-05-04T11:58:15
2025-04-01T06:37:05.266116
{ "authors": [ "KindCoder-no" ], "repo": "Intus-AS/Types-status", "url": "https://github.com/Intus-AS/Types-status/issues/1318", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
688104156
Master -> Main Rename Part of https://github.com/Islandora/documentation/issues/1595 @Islandora-Devops/committers bump @rosiel :bowing_man:
gharchive/pull-request
2020-08-28T14:11:41
2025-04-01T06:37:05.374463
{ "authors": [ "dannylamb" ], "repo": "Islandora-Devops/ansible-role-matomo", "url": "https://github.com/Islandora-Devops/ansible-role-matomo/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
547687673
Configuration page WSOD when Tomcat is down If Tomcat is down when visiting the general configuration page '/admin/config', the page will WSOD and report to the log: GuzzleHttp\Exception\ConnectException: cURL error 7: Failed connect to 127.0.0.1:8080; Connection refused (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in GuzzleHttp\Handler\CurlFactory::createRejection() (line 185 of /var/www/html/drupal/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php). To be honest, I don't know why the general configuration page crashes when Tomcat is down, but it does. To reproduce: Spin up a box. Visit 'http://localhost:8000/admin/config', which should load fine. vagrant ssh sudo systemctl stop tomcat8 Visit 'http://localhost:8000/admin/config', which will WSOD. Visit the recent log messages page (http://localhost:8000/admin/reports/dblog) and see the error listed. Resolved with https://github.com/Islandora/islandora/commit/2a1024c19a4b495c64fb88d4260e8d7ebea1f9b5
gharchive/issue
2020-01-09T19:55:07
2025-04-01T06:37:05.379240
{ "authors": [ "seth-shaw-unlv", "whikloj" ], "repo": "Islandora/documentation", "url": "https://github.com/Islandora/documentation/issues/1396", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1126300068
Update travis_setup_drupal.sh Update scripts because of a bad update to phpcs. Require 8.3.13, since 8.3.14 breaks; see https://www.drupal.org/project/coder/issues/3262291 Currently testing via this PR: https://github.com/Islandora/islandora_defaults/pull/64 @Islandora/8-x-committers This is holding up at least 3 PRs right now: https://github.com/Islandora/controlled_access_terms/pull/78 https://github.com/Islandora/islandora_defaults/pull/64 https://github.com/Islandora/islandora/pull/862 Using the examples I was able to determine this should work.
gharchive/pull-request
2022-02-07T17:40:33
2025-04-01T06:37:05.382672
{ "authors": [ "DonRichards", "rosiel" ], "repo": "Islandora/islandora_ci", "url": "https://github.com/Islandora/islandora_ci/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2098338302
[ANDROID] - Notifications do not trigger onForegroundEvent
Notifications do not trigger onForegroundEvent (Android). I ran several tests and I can't figure out how to solve it. My project uses Iterable, Firebase Messaging, and Notifee. The issue is that when a new notification arrives from Iterable, I receive it via Firebase and it is shown automatically. On Android, when I press that notification with the app open, nothing happens: no Notifee event fires where I could act on it in code. The opposite happens on iOS, where Notifee's onForegroundEvent function does run. How can I implement an onPress handler on Android, or how can I solve this? I don't know if this issue belongs here or with one of the other packages I mentioned. Here are the library versions and how I implemented it.

"@iterable/react-native-sdk": "1.3.17",
"@notifee/react-native": "7.8.2",
"@react-native-firebase/analytics": "18.7.3",
"@react-native-firebase/app": "18.7.3",
"@react-native-firebase/crashlytics": "18.7.3",
"@react-native-firebase/dynamic-links": "18.7.3",
"@react-native-firebase/messaging": "18.7.3",
"@react-native-firebase/perf": "18.7.3",
"@react-native-firebase/remote-config": "18.7.3"

// Firebase notification events setup
FirebaseMessaging().setBackgroundMessageHandler(handleRemoteNotification);
FirebaseMessaging().onMessage(handleRemoteNotification);
FirebaseMessaging().onTokenRefresh(updatePushNotificationToken);
FirebaseMessaging().setAutoInitEnabled(true);

// Push android notifications setup
await notifee.createChannel({
  ...channel,
  vibration: true,
  importance: AndroidImportance.HIGH,
  sound: 'default',
});

// Handle onNotificationPress while the app is running
notifee.onForegroundEvent(({ type, detail }) => {
  console.log('\n\n', '# foreground', type, '\n\n');
  if (!detail.notification) return;
  switch (type) {
    case EventType.PRESS:
    case EventType.ACTION_PRESS:
      handlePushNotificationPressed(detail.notification);
  }
});

// Handle onNotificationPress while the app is closed
notifee.onBackgroundEvent(async ({ type, detail }) => {
  console.log('\n\n', '# background', type, '\n\n');
  if (!detail.notification) return;
  switch (type) {
    case EventType.PRESS:
    case EventType.ACTION_PRESS: {
      handlePushNotificationPressed(detail.notification);
      // Remove the notification
      if (detail.notification?.id) {
        await notifee.cancelNotification(detail.notification.id);
      }
    }
  }
});

// Iterable setup
const config = new IterableConfig();
config.autoPushRegistration = true;
const initialized = await Iterable?.initialize(env.iterableApiKey, config);
Iterable?.setEmail(user);

The code looks quite simple. It is worth noting that when I receive an event from Firebase and the notification comes from Iterable, I do not show it a second time with Notifee.

EDIT 1: to explain the execution flow a little:
I send a notification from Iterable with the app open but in the background. Firebase catches it and executes onMessage or the setBackgroundMessageHandler callback. Since it is a notification from Iterable, I don't take any action with Notifee, otherwise it would be duplicated. Then, if I press that notification, it opens the application but nothing happens: no Notifee event is executed.

I was testing, and the only difference I can find between this notification and one I launch from Notifee is the channelId, which is the Firebase default, although I don't think it is relevant: fcm_fallback_notification_channel

Hi @LcsGrz, thanks for reaching out.
Can you test extending your Firebase service to the Iterable service so it can forward onMessageReceived and onNewToken calls to IterableFirebaseMessagingService.handleMessageReceived and IterableFirebaseMessagingService.handleTokenRefresh, respectively?
Reference: https://support.iterable.com/hc/en-us/articles/360035019712-Iterable-s-Android-SDK#handling-firebase-push-messages-and-tokens

@jena-chakour Hi :D, I hope you are doing well. I tried this, but I am not sure how to debug it on Android. What I did was create the Java file in the android folder, and in the manifest I added the following line:

<service android:name=".MyFirebaseMessagingService" />

package com.p;

import android.util.Log;
import com.google.firebase.messaging.FirebaseMessagingService;
import com.google.firebase.messaging.RemoteMessage;
import com.iterable.iterableapi.IterableFirebaseMessagingService;

public class MyFirebaseMessagingService extends FirebaseMessagingService {
    private static final String TAG = "MyFirebaseMsgService";

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        // Log.d needs both a tag and a message (the single-argument call
        // in the original snippet would not compile).
        Log.d(TAG, "onMessageReceived");
        IterableFirebaseMessagingService.handleMessageReceived(this, remoteMessage);
    }

    @Override
    public void onNewToken(String s) {
        Log.d(TAG, "onNewToken");
        IterableFirebaseMessagingService.handleTokenRefresh();
    }
}

Then I ran npx react-native log-android, but nothing happens :(
gharchive/issue
2024-01-24T14:03:25
2025-04-01T06:37:05.398373
{ "authors": [ "LcsGrz", "jena-chakour" ], "repo": "Iterable/react-native-sdk", "url": "https://github.com/Iterable/react-native-sdk/issues/530", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1147470594
Run command for generating payload with bash
This is just for testing to see if the workflows succeed here. It needs to be in a PR because otherwise the workflow isn't triggered.
works, yay
gharchive/pull-request
2022-02-22T23:28:40
2025-04-01T06:37:05.400816
{ "authors": [ "ItsDrike" ], "repo": "ItsDrike/mcstatus", "url": "https://github.com/ItsDrike/mcstatus/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1077823138
To lowercase folder names
All done.
gharchive/issue
2021-12-12T14:40:01
2025-04-01T06:37:05.407546
{ "authors": [ "IvanVnucec" ], "repo": "IvanVnucec/cubesat-adcs", "url": "https://github.com/IvanVnucec/cubesat-adcs/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
164736615
Why is it so slow?
My data is only a few thousand records.

Not helpful in any way, no details. Plus, performance depends largely on the back-end store you're using.

@Ivshti I use it in Electron, with no back-end store.

You can't possibly be using it without a back-end store, as linvodb3 cannot be used without one.
gharchive/issue
2016-07-10T22:04:46
2025-04-01T06:37:05.413897
{ "authors": [ "Ivshti", "milu2003" ], "repo": "Ivshti/linvodb3", "url": "https://github.com/Ivshti/linvodb3/issues/50", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
680785601
Can't extract pretrained models
Hi, we tried to extract the pretrained models from Google Drive, but when we open them on both Linux and Windows they appear to be corrupted.

Hi, you don't need to extract the checkpoint; it can be loaded directly by PyTorch.
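For anyone hitting the same thing, a minimal sketch of loading the file directly, assuming it is a standard PyTorch checkpoint; the filename is a placeholder for whatever you downloaded from the Google Drive link:

import torch

# The downloaded file is a PyTorch checkpoint, not an archive -- there is
# nothing to extract. Load it straight away (filename is hypothetical).
checkpoint = torch.load("pretrained_model.pth", map_location="cpu")

# Checkpoints are usually a state_dict, or a dict wrapping one; inspect
# the keys to see what was saved before passing it to load_state_dict.
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])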
gharchive/issue
2020-08-18T08:02:27
2025-04-01T06:37:05.484788
{ "authors": [ "AIprogrammer", "FabioTarocco" ], "repo": "JDAI-CV/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving", "url": "https://github.com/JDAI-CV/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
163276021
Remove Custom Select Box
http://jdrf.github.io/design-system/dist/components.html#forms
@fuhton PR submitted: https://github.com/JDRF/design-system/pull/273
gharchive/issue
2016-06-30T21:57:26
2025-04-01T06:37:05.486198
{ "authors": [ "RachelRVasquez", "fuhton" ], "repo": "JDRF/design-system", "url": "https://github.com/JDRF/design-system/issues/270", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }