#17677 closed bug (fixed)
HDS and website's "Packages" tab not being updated
Reported by: | humdinger | Owned by: | apl-haiku |
---|---|---|---|
Priority: | normal | Milestone: | |
Component: | Website/HaikuDepotServer | Version: | R1/Development |
Keywords: | Cc: | ||
Blocked By: | Blocking: | ||
Platform: | All |
Description
No new or updated packages appear on the HDS site, in the website's "Packages" tab, or in the HaikuDepot app. The last entry is from 15th March 2022.
Change History (7)
comment:1 by , 3 years ago
comment:2 by , 3 years ago
I think this might be a relevant bit:
```
2022-03-27 09:29:48,803 / [job-run-d...] ERROR o.h.h.r.j.RepositoryHpkrIngressJobRunner - a problem has arisen processing a repository file for repository source [haikuports_x86_gcc2]
org.haiku.haikudepotserver.repository.model.RepositoryHpkrIngressException: a problem has arisen parsing or dealing with the 'repo.info' file
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.runImportInfoForRepositorySource(RepositoryHpkrIngressJobRunner.java:195)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.runForRepositorySource(RepositoryHpkrIngressJobRunner.java:120)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.lambda$run$1(RepositoryHpkrIngressJobRunner.java:102)
    at org.apache.cayenne.tx.DefaultTransactionManager$BaseTransactionHandler.performInTransaction(DefaultTransactionManager.java:183)
    at org.apache.cayenne.tx.DefaultTransactionManager$BaseTransactionHandler.performInNewTransaction(DefaultTransactionManager.java:155)
    at org.apache.cayenne.tx.DefaultTransactionManager$NestedTransactionHandler.handle(DefaultTransactionManager.java:98)
    at org.apache.cayenne.tx.DefaultTransactionManager.performInTransaction(DefaultTransactionManager.java:65)
    at org.apache.cayenne.tx.DefaultTransactionManager.performInTransaction(DefaultTransactionManager.java:43)
    at org.apache.cayenne.configuration.server.ServerRuntime.performInTransaction(ServerRuntime.java:90)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.lambda$run$2(RepositoryHpkrIngressJobRunner.java:100)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
    at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
    at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
    at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.run(RepositoryHpkrIngressJobRunner.java:99)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.run(RepositoryHpkrIngressJobRunner.java:58)
    at org.haiku.haikudepotserver.job.LocalJobServiceImpl.runSpecificationInCurrentThread(LocalJobServiceImpl.java:299)
    at org.haiku.haikudepotserver.job.LocalJobServiceImpl.lambda$createInternalJobBySubmittingToExecutor$8(LocalJobServiceImpl.java:257)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.ConnectException: null
    at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:561)
    at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:119)
    at org.haiku.haikudepotserver.support.FileHelper.streamHttpUriDataToFile(FileHelper.java:56)
    at org.haiku.haikudepotserver.support.FileHelper.streamUrlDataToFile(FileHelper.java:36)
    at org.haiku.haikudepotserver.repository.job.RepositoryHpkrIngressJobRunner.runImportInfoForRepositorySource(RepositoryHpkrIngressJobRunner.java:150)
    ... 27 common frames omitted
Caused by: java.net.ConnectException: null
    at java.net.http/jdk.internal.net.http.common.Utils.toConnectException(Utils.java:1021)
    at java.net.http/jdk.internal.net.http.PlainHttpConnection.connectAsync(PlainHttpConnection.java:179)
    at java.net.http/jdk.internal.net.http.Http1Exchange.sendHeadersAsync(Http1Exchange.java:238)
    at java.net.http/jdk.internal.net.http.Exchange.lambda$responseAsyncImpl0$8(Exchange.java:435)
    at java.net.http/jdk.internal.net.http.Exchange.checkFor407(Exchange.java:367)
    at java.net.http/jdk.internal.net.http.Exchange.lambda$responseAsyncImpl0$9(Exchange.java:439)
    at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
    at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
    at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
    at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl0(Exchange.java:439)
    at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl(Exchange.java:343)
    at java.net.http/jdk.internal.net.http.Exchange.responseAsync(Exchange.java:335)
    at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:347)
    at java.net.http/jdk.internal.net.http.MultiExchange.lambda$responseAsyncImpl$7(MultiExchange.java:388)
    at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
    at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
    at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
    at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:378)
    at java.net.http/jdk.internal.net.http.MultiExchange.lambda$responseAsync0$2(MultiExchange.java:293)
    at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1705)
    at java.net.http/jdk.internal.net.http.HttpClientImpl$DelegatingExecutor.execute(HttpClientImpl.java:153)
    at java.base/java.util.concurrent.CompletableFuture.completeAsync(CompletableFuture.java:2591)
    at java.net.http/jdk.internal.net.http.MultiExchange.responseAsync(MultiExchange.java:246)
    at java.net.http/jdk.internal.net.http.HttpClientImpl.sendAsync(HttpClientImpl.java:632)
    at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:540)
    ... 31 common frames omitted
Caused by: java.nio.channels.UnresolvedAddressException: null
    at java.base/sun.nio.ch.Net.checkAddress(Net.java:131)
    at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:673)
    at java.net.http/jdk.internal.net.http.PlainHttpConnection.lambda$connectAsync$0(PlainHttpConnection.java:165)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.net.http/jdk.internal.net.http.PlainHttpConnection.connectAsync(PlainHttpConnection.java:167)
    ... 56 common frames omitted
```
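The root cause at the bottom of the trace, java.nio.channels.UnresolvedAddressException, is what Java NIO throws when a SocketChannel is asked to connect to a socket address whose hostname was never resolved to an IP address. A minimal sketch of that failure mode (using createUnresolved to stand in for the failed DNS lookup of an internal host such as "www", so no actual network traffic is involved):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class UnresolvedDemo {
    public static void main(String[] args) throws Exception {
        // createUnresolved() skips the DNS lookup entirely, leaving the
        // address in the same unresolved state that a failed lookup of a
        // hostname like "www" would produce.
        InetSocketAddress addr = InetSocketAddress.createUnresolved("www", 80);
        try (SocketChannel channel = SocketChannel.open()) {
            channel.connect(addr); // fails before any packet is sent
        } catch (UnresolvedAddressException e) {
            System.out.println("UnresolvedAddressException");
        }
    }
}
```

In the trace above this exception surfaces wrapped in a java.net.ConnectException, which is why the top-level error only reports a generic connection problem.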
comment:3 by , 3 years ago
Not quite sure what is happening here; I can use curl from within the container to fetch the data at the URL.
comment:4 by , 3 years ago
Actually this looks wrong:
```
2022-03-27 09:29:48,787 / [job-run-d...] INFO o.h.h.r.j.RepositoryHpkrIngressJobRunner - will copy data for repository info [RepositorySource[code=haikuports_x86_gcc2]] (http://www:80/haikuports/master/x86_gcc2/current/repo.info) to temporary file
```
The internal base URL on the details page seems wrong.
comment:5 by , 3 years ago
Hi @nielx; when the system was running in Docker Swarm, HDS was unable to connect from inside the Docker network back into the Docker Swarm applications (inside network --> outside network --> inside network) for some reason. It needs to do this to query the repo.info and the repo files from the repository. This networking problem could not be resolved, so an optional "forced internal base URL" property was added to the Repository record. If this property is defined, HDS will use that URL instead of the public base URL to fetch data for the repository.
Most likely the configured "forced internal base URL" no longer works in the K8S environment. Either remove the "forced internal base URL" to see whether HDS is able to call back in through the K8S ingress, or modify it so that HDS can reach the repository data inside the K8S network.
You can edit this in the HDS GUI as an admin user.
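The selection logic described above amounts to: use the "forced internal base URL" if it is set, otherwise fall back to the public base URL. A hypothetical sketch of that rule; the method name and the example public URL are illustrative assumptions, not HDS's actual code:

```java
import java.util.Optional;

public class RepositorySourceUrls {
    // Illustrative only: HDS's real field and method names may differ.
    static String downloadBaseUrl(String publicBaseUrl, String forcedInternalBaseUrl) {
        return Optional.ofNullable(forcedInternalBaseUrl)
                .filter(url -> !url.isBlank())
                .orElse(publicBaseUrl);
    }

    public static void main(String[] args) {
        // With the forced internal URL configured, fetches go to the
        // internal host ("www"), which may not resolve in a new environment.
        System.out.println(downloadBaseUrl(
                "https://example.org/haikuports/master/x86_gcc2/current",
                "http://www:80/haikuports/master/x86_gcc2/current"));
        // With the property blanked out, the public base URL is used instead.
        System.out.println(downloadBaseUrl(
                "https://example.org/haikuports/master/x86_gcc2/current",
                ""));
    }
}
```

Under this reading, blanking the property is enough to make the ingress job fall back to fetching from the public base URL.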
comment:6 by , 3 years ago
Milestone: | Unscheduled |
---|---|
Resolution: | → fixed |
Status: | new → closed |
Setting the internal URL to blank worked. I triggered a manual update and it looks like the sync was successful.
Thanks for the help!
comment:7 by , 3 years ago
Ah, yeah. www was the internal reference within Docker Swarm.
We may need to adjust it back to www (or whatever the haikuports service name will be once it is migrated over), but for now this seems fine.
Thanks for looking at this, nielx!
Discussing on the Haiku sys-admin list...