Search Results

Search found 8125 results on 325 pages for 'metadata cache'.


  • How best to modernize the 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000s. Currently they add a feature here and there, but they have a large codebase, and over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school counselor shame into the mix.]

    Java, Vintage 2002: The application uses EJB 2.1, Struts 1.x, and DAOs with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years, they have spent effort to modernize the UI using Ajax techniques and JavaScript libraries (jQuery, YUI, etc.).

    Client side: The lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating. Other web frameworks became popular (Wicket, Spring, JSF), so Struts 2 was never the clear winner. Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to present much marginal benefit at a very high cost, and they did not want a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features using Struts 1.

    Server side: They looked into moving to EJB 3, but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome, and EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the EJB server runs in a separate JVM from the servlet engine. No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box than the app server. The EAR file contains multiple copies of the same jar file: one for the web layer (foo.war/WEB-INF/lib) and one for the server side (foo.ear/). The app server only loads one jar, so the duplication makes for ambiguity.

    Caching: As mentioned, they use several cache implementations, OpenSymphony cache and a homegrown cache, with JGroups providing clustering support.

    Now what? The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new subsystems/features and reducing maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred.

    What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that 64-bit JVMs offer? Thanks in advance.
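    On the persistence question, the main productivity win is replacing the hand-written JDBC plumbing with mapped entities. A minimal sketch of what that buys under JPA 2 (the Customer table, columns, and DAO here are hypothetical, for illustration only):

        import javax.persistence.*;
        import java.util.List;

        @Entity
        @Table(name = "customer")
        public class Customer {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            @Column(name = "name")
            private String name;

            public Long getId() { return id; }
            public String getName() { return name; }
        }

        class CustomerDao {
            private final EntityManager em;

            CustomerDao(EntityManager em) { this.em = em; }

            // One JPQL query replaces the PreparedStatement, ResultSet
            // mapping, and connection handling of a hand-rolled JDBC DAO.
            List<Customer> findByName(String name) {
                return em.createQuery(
                        "select c from Customer c where = :name",
                        Customer.class)
                    .setParameter("name", name)
                    .getResultList();
            }
        }

    Because JPA is a standard (and Java EE 6 bundles JPA 2), this keeps the Hibernate-versus-wait question open: the same entities work either way.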


  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:

        Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed

    When I run sudo apt-get -f install, I get several warnings like:

        dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.

    then:

        Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
        Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
        dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
        dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
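    A sketch of the usual way out of this kind of half-upgraded state, under the assumption that 3.4.2-0ubuntu0.4 is what the archive currently carries; the diversion cleanup is only needed if the dpkg-divert error persists:

        sudo apt-get update
        # Install matching versions of the related GTK3 packages in one transaction:
        sudo apt-get install libgtk-3-common=3.4.2-0ubuntu0.4 libgtk-3-bin=3.4.2-0ubuntu0.4
        # If dpkg-divert still refuses, clear the leftover diversion and retry:
        sudo dpkg-divert --remove /usr/sbin/update-icon-caches
        sudo apt-get -f install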


  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert:

        $ curl --verbose --capath ./certs/ --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: ./certs/
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

        curl performs SSL certificate verification by default, using a "bundle" of
        Certificate Authority (CA) public keys (CA certs). If the default bundle file
        isn't adequate, you can specify an alternate file using the --cacert option.
        If this HTTPS server uses a certificate signed by a CA represented in the
        bundle, the certificate verification probably failed due to a problem with
        the certificate (it might be expired, or the name might not match the domain
        name in the URL). If you'd like to turn off curl's verification of the
        certificate, use the -k (or --insecure) option.

    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains:

        - A Verisign intermediate cert
        - Two self-signed certs

    The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert), curl is able to verify the SSL cert:

        $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: ./certs/verisign-intermediate-ca.crt
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using RC4-SHA
        * Server certificate:
        *   subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
        *   start date: 2011-04-17 00:00:00 GMT
        *   expire date: 2012-04-15 23:59:59 GMT
        *   common name: example.com (matched)
        *   issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
        * SSL certificate verify ok.
        > HEAD / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        >
        < HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        < Cache-Control: must-revalidate,no-cache,no-store
        Cache-Control: must-revalidate,no-cache,no-store
        < Content-Type: text/html;charset=ISO-8859-1
        Content-Type: text/html;charset=ISO-8859-1
        < Content-Length: 1267
        Content-Length: 1267
        < Server: Jetty(7.2.2.v20101205)
        Server: Jetty(7.2.2.v20101205)
        <
        * Connection #0 to host example.com left intact
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option:

        $ openssl s_client -CApath ./certs/ -connect example.com:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify return:1
        depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        verify return:1
        depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        verify return:1
        depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        verify return:1
        ---
        Certificate chain
         0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <cert removed>
        -----END CERTIFICATE-----
        subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1563 bytes and written 435 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-SHA
            Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
            Session-ID-ctx:
            Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
            Key-Arg   : None
            Start Time: 1303258052
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)
        ---
        QUIT
        DONE

    How can I get curl to verify this cert using the --capath option?
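    One thing worth checking, as a sketch under the assumption that the curl binary links a different SSL library build than the openssl that ran c_rehash: OpenSSL changed its subject-hash algorithm at 1.0, so the symlink names c_rehash creates must match the hash style of the library curl actually uses (0.9.8k, per the User-Agent above).

        # Hash as an OpenSSL 0.9.8-era library computes it:
        openssl x509 -noout -hash -in ./certs/verisign-intermediate-ca.crt
        # On OpenSSL >= 1.0 the old-style value is exposed separately; compare:
        openssl x509 -noout -subject_hash_old -in ./certs/verisign-intermediate-ca.crt
        # Then see which names the symlinks actually carry:
        ls -l ./certs/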


  • error while installing binutils in LFS

    - by user53347
    lfs:/mnt/lfs/sources/binutils-build$ ../binutils-2.15.94.0.2.2/configure \
        --target=$LFS_TGT --prefix=/tools \
        --disable-nls --disable-werror

        loading cache ./config.cache
        checking host system type... i686-pc-linux-gnuoldld
        checking target system type... i686-lfs-linux-gnu
        checking build system type... i686-pc-linux-gnuoldld
        checking for a BSD compatible install... /usr/bin/install -c
        checking whether ln works... yes
        checking whether ln -s works... yes
        checking for gcc... no
        checking for cc... no
        configure: error: no acceptable cc found in $PATH
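    configure is reporting that no C compiler is reachable on the PATH, which is an environment problem rather than a binutils one. A sketch of the usual checks for this stage of LFS (paths follow the book's /tools convention; adjust to your chapter):

        type gcc cc                      # what can the shell actually find?
        echo $PATH                       # in LFS chapter 5, /tools/bin should come first
        export PATH=/tools/bin:/bin:/usr/bin
        # If the host has gcc but no 'cc' name, give configure one to find:
        ln -sv $(which gcc) /usr/bin/cc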


  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to keep off-site incremental archives of a snapshot "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled as to the block-usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache, and the hard disk itself likewise don't strictly need to do any actual "disk writes", as the "disk block" is unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
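    This is easy to answer empirically on a scratch volume group; a sketch (volume group, volume, and file names are hypothetical) that watches the snapshot's copy-on-write usage before and after rewriting identical data:

        lvcreate -L 1G -n archive vg0
        mkfs.ext4 /dev/vg0/archive
        mount /dev/vg0/archive /mnt/archive
        cp bigfile /mnt/archive/ && sync
        lvcreate -s -L 256M -n archive-snap /dev/vg0/archive
        cp bigfile /mnt/archive/bigfile && sync   # rewrite the same contents
        lvs vg0   # Data% on archive-snap shows how much CoW space the rewrite cost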


  • Caching DNS server (bind 9.4.2) CPU usage is so so so high

    - by Gk.
    I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

        Xeon dual-core 2.8GHz
        4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

    rndc status reports:

        recursive clients: 666/4900/5000

    About 300 new queries (not in cache) arrive per second. Bind always uses 100% of one core on the single-threaded config. After I recompiled it multi-threaded, it uses nearly 200% across two cores :( There is no iowait, only sys and user time. I searched around but didn't find any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

        cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since we receive uncached queries every second, the cache should theoretically keep growing until RAM is exhausted, but it doesn't. Do you have any idea? TIA, -Gk
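    Two things are worth checking here. First, on a PAE kernel the userland is still 32-bit, so a single process is limited to roughly a 3GB address space regardless of max-cache-size, which would explain the cache stalling near 2GB. Second, per-thread CPU can be inspected directly; a sketch (the statistics-file path varies with your options {} block):

        top -H -p $(pidof named)    # per-thread view: is one worker thread pegged?
        rndc stats                  # dump counters, then inspect the stats file
        tail -n 40 /var/named/data/named_stats.txt
        # The worker-thread count can also be forced explicitly at startup:
        named -u named -n 2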


  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so:

        server {
            listen 80;
            server_name *.example1.com *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
            }
        }

    And since we have multiple SSL sites (with different SSL certificates) I have a server{} block for each of them, like so:

        server {
            listen 443 ssl;
            server_name *.example1.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8443;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

        server {
            listen 443 ssl;
            server_name *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8445;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything: first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB, and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB -> http on nginx -> http on Apache? Secondly, there is so much duplication above. Is the best way to avoid repeating myself to put all of the static asset handling in an include file and just include it in each server block?
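    Yes: terminating TLS once at the ELB and speaking plain HTTP behind it is the common pattern, and it also collapses the duplication. A sketch of what the nginx side becomes under that assumption (the static-asset location moves into a shared include; the file name is hypothetical):

        # ELB: 443 (TLS) -> instance port 80; nginx and Apache see only HTTP.
        server {
            listen 80;
            server_name *.example1.com *.example2.com;

            include /etc/nginx/conf.d/static-assets.conf;  # the shared static block

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
                # The ELB already sets X-Forwarded-Proto/-Port, so the app can
                # still distinguish requests that originally arrived over https.
            }
        }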


  • Configure fallback redis server

    - by snøreven
    I am using redis as a cache server. Can I somehow configure multiple redis servers so that the cache stays fully functional (read/write) even if some of them go offline? I looked into master-slave, but the problem I see there is that if the master fails and I allow writes to the slaves, those writes get overwritten once the master is up again: the master then just serves the old data. The only solution I could come up with was disabling write-to-disk, but that sucks, as I lose everything if I have to restart the master. And I guess the slaves wouldn't be synced anymore if the master is gone.
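    A sketch of the usual answer, assuming a Redis version recent enough to ship Sentinel: don't write to slaves directly. Sentinel promotes a slave on failure, and when the old master comes back it is reconfigured as a slave of the new master, so its stale data cannot overwrite anything. (Hostnames and the master name below are made up.)

        # sentinel.conf on (at least) three monitor hosts:
        sentinel monitor mycache 10.0.0.1 6379 2
        sentinel down-after-milliseconds mycache 5000
        sentinel failover-timeout mycache 60000

        # redis.conf on the slaves: refuse direct writes
        slave-read-only yes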


  • Easiest way to combine a dataset and other data in a single file?

    - by Tim Gradwell
    I have a dataset in C# which I can serialise using dataset.WriteXml(filename); but I want the file to contain other data as well (essentially some metadata about the dataset). I could add another table to the dataset which contained the metadata, but I'd prefer to keep it separate, if at all possible. Essentially I think I want to create a 'combination of files' file, that looks something like this:

        size_of_file1
        file1
        size_of_file2
        file2
        ... etc

    Then, I'd like to load the file into memory, and split the file into separate streams, so that I can feed the dataset into dataset.ReadXml(stream); and the metadata into something else. Sound possible? Can anyone tell me how I can do it? Thanks Tim
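    A minimal sketch of that length-prefixed layout (the class and method names are made up for illustration): each chunk is written as an 8-byte length followed by the payload, and read back the same way.

        using System;
        using System.IO;

        static class CombinedFile
        {
            // Writes one [8-byte length][payload] record.
            public static void Append(Stream output, byte[] payload)
            {
                output.Write(BitConverter.GetBytes((long)payload.Length), 0, 8);
                output.Write(payload, 0, payload.Length);
            }

            // Reads the next record, or returns null at end of file.
            public static byte[] ReadNext(Stream input)
            {
                byte[] len = new byte[8];
                if (input.Read(len, 0, 8) < 8) return null;
                byte[] payload = new byte[BitConverter.ToInt64(len, 0)];
                int off = 0;
                while (off < payload.Length)
                {
                    int n = input.Read(payload, off, payload.Length - off);
                    if (n == 0) throw new EndOfStreamException();
                    off += n;
                }
                return payload;
            }
        }

    Writing would then be dataset.WriteXml into a MemoryStream and Append-ing its ToArray(), plus a second chunk for the metadata; reading wraps each ReadNext result in a new MemoryStream for dataset.ReadXml.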


  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (MySQL). 83MB free looks pretty bad, but I assume the buffers/cache will be used instead of swap?

        [admin@db1 www]$ free -m
                     total       used       free     shared    buffers     cached
        Mem:         16053      15970         83          0        122       5343
        -/+ buffers/cache:      10504       5549
        Swap:         2047          0       2047

    top output sorted by memory:

        top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
        Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
        Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
        Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached

          PID USER   PR  NI  VIRT   RES  SHR S %CPU %MEM    TIME+  COMMAND
        20757 mysql  15   0 10.2g  9.7g 5440 S 29.0 61.6 28588:24  mysqld
        16610 root   15   0  184m   18m 4340 S  0.0  0.1   0:32.89 sysshepd
         9394 root   15   0  154m  8336 4244 S  0.0  0.1   0:12.20 snmpd
        17481 ntp    15   0 23416  5044 3916 S  0.0  0.0   0:02.32 ntpd
         2000 root    5 -10 12652  4464 3184 S  0.0  0.0   0:00.00 iscsid
         8768 root   15   0 90164  3376 2644 S  0.0  0.0   0:00.01 sshd


  • APC serving old code intermittently running with Lighttpd and PHP Fast CGI

    - by APZ
    I recently started facing this problem where APC shows old code when we upload an HTML template file to fix/change something on our websites. We run APC with apc.stat=0 and want to keep it that way, because we seldom make changes to templates. Every time we upload a template we make sure to flush the APC cache, and we execute this script (only part of it shown here) to clear it:

        apc_clear_cache();
        apc_clear_cache('user');
        apc_clear_cache('system');
        apc_clear_cache(opcode);

    We use lighttpd and PHP FastCGI, and FastCGI has "max-procs" = 2, "PHP_FCGI_CHILDREN" = "5". Even after flushing APC once the upload is complete, it serves the old template intermittently. Any help would be appreciated.
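    One likely explanation, given max-procs = 2: lighttpd spawns two independent php-cgi process trees, each with its own APC shared-memory segment, and the HTTP request that runs the clear script only lands in one of them, so the other keeps serving stale opcodes intermittently. (As an aside, the bare opcode constant in the last apc_clear_cache() call above is probably a typo for the string 'opcode'.) A sketch of the blunt workaround after each deploy:

        # Restart lighttpd so all php-cgi trees (and their APC segments) are respawned:
        /etc/init.d/lighttpd restart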


  • Import part with specific meta data using MEF Preview 5

    - by Riri
    I have an export defined as follows in MEF Preview 5:

        [ExportMetadata("Application", "CheckFolderApplication")]
        [Export(typeof(ExtendedArtifactBase))]
        public class CheckFolderArtifact2 : ExtendedArtifactBase { ...

    Then I only want the imports carrying the "Application" = "CheckFolderApplication" metadata. Currently I read all the imports and then filter the result:

        [Import(typeof(ExtendedApplicationBase))]
        private ExportCollection<IApplication> _applications { get; set; }

        public IApplication GetApplication(string applicationName)
        {
            return _applications.Single(
                a => a.GetExportedObject().Name == applicationName).GetExportedObject();
        }

    This feels really inefficient. What if I have thousands of plug-ins - do I have to read them all via MEF just to get the one with the right metadata? If so, how do you cache the result?
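    A sketch of the metadata-view approach, assuming the preview build carries the two-type-parameter collection (in shipped MEF the same idea is [ImportMany] IEnumerable<Lazy<T, TMetadata>>); the point is that metadata comes from the catalog, so only the matching part ever gets instantiated:

        // Hypothetical metadata view matching the ExportMetadata key above:
        public interface IApplicationMetadata
        {
            string Application { get; }
        }

        [Import(typeof(ExtendedApplicationBase))]
        private ExportCollection<IApplication, IApplicationMetadata> _applications { get; set; }

        public IApplication GetApplication(string applicationName)
        {
            // Filtering on MetadataView does not create the parts;
            // only the single match is instantiated here.
            return _applications
                .Single(e => e.MetadataView.Application == applicationName)
                .GetExportedObject();
        }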


  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites that, when a page is requested from their server, grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?


  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of the Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help figure out whether the memory usage is already high, or whether I can still add some extra workers. So (this is all under maximum load):

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it, etc.
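    One way to turn "test and see" into numbers is to measure the resident size of each worker and divide the remaining headroom by it; a sketch, assuming the worker processes show "resque" in their command line:

        # Average resident memory per Resque worker, in MB:
        ps -o rss=,command= -C ruby | grep -i resque \
          | awk '{ sum += $1; n++ } END { if (n) printf "%d workers, %.0f MB each\n", n, sum/n/1024 }'
        # Compare against the "-/+ buffers/cache" free column (464 MB here).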


  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website, and I was using the network waterfall facility in Google Chrome. When I looked at the results, there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files? Are they all necessary? Is this normal, and do I need to be concerned in any way? My hosting provider has always been excellent in every regard that I'm aware of. My host is shared hosting, using cPanel, based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.


  • Empty dict-like collection problem in SQLAlchemy

    - by maksymko
    I have a mapping in SQLAlchemy that looks like this:

        t_property_value = sa.Table('property_value', MetaData,
                                    autoload=True, autoload_with=engine)
        orm.mapper(PropertyValue, t_property_value)

        t_estate = sa.Table('estate', MetaData, autoload=True, autoload_with=engine)
        orm.mapper(Estate, t_estate, properties=dict(
            property_hash=orm.relation(
                PropertyValue,
                collection_class=column_mapped_collection(t_property_value.c.property_id))
        ))

    Now, everything seems to be fine when I load the Estate object and it has some relations to PropertyValue objects. However, when it does not, the property_hash attribute is None, instead of being something dict-like, so I can not add new relations like this:

        estate.property_hash[prop_id] = PropertyValue(...)

    because I get the "'NoneType' object does not support item assignment" error. So, is there any way to force SQLAlchemy to create a proper empty collection?


  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details! During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect.

    Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.


  • Best usb storage for my router, Asus RT-AC66U?

    - by Jason94
    I have the ASUS RT-AC66U and I want to add USB storage to it. It has 2x USB, and I'm already using one for my printer, so I want to use the last one to attach USB storage. I've read some reviews stating that the throughput over USB can be up to 18 MB/s. So, in regard to USB storage, should I care about the hard disk's cache? Simple USB-powered drives seem to have 8 MB of cache; others (externally powered) have 16 MB, for instance.


  • Many-to-many relationship on same table with association object

    - by Nicholas Knight
    Related (for the no-association-object use case): http://stackoverflow.com/questions/1889251/sqlalchemy-many-to-many-relationship-on-a-single-table

    Building a many-to-many relationship is easy. Building a many-to-many relationship on the same table is almost as easy, as documented in the above question. Building a many-to-many relationship with an association object is also easy. What I can't seem to find is the right way to combine association objects and many-to-many relationships with the left and right sides being the same table. So, starting from the simple, naïve, and clearly wrong version that I've spent forever trying to massage into the right version:

        t_groups = Table('groups', metadata,
            Column('id', Integer, primary_key=True),
        )
        t_group_groups = Table('group_groups', metadata,
            Column('parent_group_id', Integer, ForeignKey('groups.id'),
                   primary_key=True, nullable=False),
            Column('child_group_id', Integer, ForeignKey('groups.id'),
                   primary_key=True, nullable=False),
            Column('expires', DateTime),
        )
        mapper(Group_To_Group, t_group_groups, properties={
            'parent_group': relationship(Group),
            'child_group': relationship(Group),
        })

    What's the right way to map this relationship?
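    For what it's worth, the usual fix for the ambiguity here (two foreign keys into the same table) is to pin each relationship to its own column with an explicit primaryjoin; a sketch against the tables above:

        mapper(Group_To_Group, t_group_groups, properties={
            # Each side names which of the two FK columns it follows:
            'parent_group': relationship(
                Group,
                primaryjoin=t_group_groups.c.parent_group_id == t_groups.c.id),
            'child_group': relationship(
                Group,
                primaryjoin=t_group_groups.c.child_group_id == t_groups.c.id),
        })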


  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday, my Apache server that runs on a Debian machine has been very unstable. Sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of: Out of memory (allocated 262144) (tried to allocate 4480 bytes). I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL with 6.5%. Where else can I look for the problem?

    Edit: I did a free -m right after rebooting and another one about 2 hours later. I think the trend is visible:

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016        731       3284          0         80        200
        -/+ buffers/cache:        449       3566
        Swap:          459          0        459

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016       2466       1550          0         92        473
        -/+ buffers/cache:       1900       2115
        Swap:          459          0        459
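    To see what is actually accumulating memory between the two snapshots, it helps to rank processes by resident size rather than percentage, and to watch Apache's children over time; a sketch (the directive name is for Apache 2.2's prefork MPM, adjust to the MPM in use):

        ps aux --sort=-rss | head -n 15    # top resident-memory consumers right now
        watch -n 60 'ps -C apache2 -o pid,rss,etime --sort=-rss | head'
        # If the children themselves keep growing, recycling them caps the leak:
        #   MaxRequestsPerChild 1000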


  • Squid throws error, The requested URL could not be retrieved

    - by Supratik
    Hi

    Sometimes I am getting the following error:

        The requested URL could not be retrieved

        While trying to retrieve the URL: http://groups.google.com/

        The following error was encountered:

            Unable to determine IP address from host name for groups.google.com

        The dnsserver returned:

            Refused: The name server refuses to perform the specified operation.

        This means that: The cache was not able to resolve the hostname presented
        in the URL. Check if the address is correct.

        Your cache administrator is root.

    What could be the reason for the above error?

    Regards
    Supratik
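    The REFUSED status comes from the resolver, not from squid itself, so the first thing to check is whether the nameserver squid uses allows recursion for this host; a sketch (the resolver addresses below are examples):

        # Query the resolver squid uses (from /etc/resolv.conf or squid.conf) directly:
        dig @192.168.1.1 groups.google.com   # REFUSED here confirms a resolver ACL problem
        dig @8.8.8.8 groups.google.com       # known-good public resolver for comparison
        # squid.conf can also pin squid to a resolver that permits recursion:
        #   dns_nameservers 8.8.8.8 8.8.4.4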


  • VMware Workstation 7&8&9 does not generate /etc/vmware/networking upon installation

    - by dash17291
    When I install VMware Workstation on Arch Linux, virtual ethernet is not working.

        $ sudo tail /var/log/vnetlib
        Aug 28 22:20:33 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:33 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:33 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:33 VNLNetCfgUnload - Requested cache is not loaded
        Database file is not present. Failed to initialize
        Aug 28 22:20:41 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
        Aug 28 22:20:41 VNLNetCfgLoad - Import file does not exist
        Aug 28 22:20:41 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
        Aug 28 22:20:41 VNLNetCfgUnload - Requested cache is not loaded
        Required modules compiled.

    Previously I copied that file (or directory, I don't remember) from a working installation, but now I need a real solution. It's strange; it may also be a hardware issue, because the same thing happens with Ubuntu on the same computer.


  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box, with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server, the form will not submit; it gives a 500 error, and the following shows in Event Viewer:

        Error: The Template Persistent Cache initialization failed for Application
        Pool 'DefaultAppPool' because of the following error: Could not create a
        Disk Cache Sub-directory for the Application Pool. The data may have
        additional error codes.

    I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince
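    This event usually means the identity running the application pool cannot write to the ASP compiled-templates directory, and permissions lost during re-imaging would fit. A sketch of the fix, assuming IIS 7.x paths and the DefaultAppPool identity (on IIS 6 the directory and the IIS_WPG group differ):

        REM Grant the app pool identity modify rights on the ASP template cache:
        icacls "%SystemDrive%\inetpub\temp\ASP Compiled Templates" /grant "IIS AppPool\DefaultAppPool":(OI)(CI)M
        iisreset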


  • Odd behavior of permissions and icons

    - by Urban
    After a virus infected my Windows 7 machine, I did a complete format and re-installed the OS. I was just installing applications and copying back some data when I noticed some shortcuts changing their icons to something I couldn't recognize (the yellow icons in the image below). Also, a few exe files which previously did not ask for user permission are now asking for it. Wondering if this was an icon cache problem, I cleared the cache by deleting IconCache.db in AppData/Local, but the problem still persists. Although I did a full system scan with MS Security Essentials, I'm not sure if this is another virus or some other problem. I would appreciate any suggestions you might have.

    Edit: Now even Firefox needs permissions to launch. Its icon hasn't changed, but it's got the 'shield' overlay on it like the other yellow icons.

