Search Results

Search found 9542 results on 382 pages for 'operating systems'.


  • What do you expect from a package manager for Emacs?

    - by tarsius
    Although several hundred Emacs Lisp libraries exist, GNU Emacs does not have an (internal) package manager. I guess that most users would agree that it is currently rather inconvenient to find, install, and especially keep up to date Emacs Lisp libraries. These pages make life a bit easier:
    - Emacs Lisp List - Problem: I see dead people (links).
    - Emacswiki - Problem: May contain traces of nuts (malicious code).
    These are some package managers:
    - XEmacs package manager
    - package.el - ELPA
    - pases
    - install.el
    - install-elisp.el
    - plugin.el
    - use-package.el
    - jem-pkg.el
    - epkg/elm - the one I am working on.
    And these are some packages that provide functionality that might be useful in a package manager:
    - ell.el - Browse the Emacs Lisp List.
    - genauto.el - Helps generate autoloads for your elisp packages.
    - date-calc.el - Date calculation and parsing routines.
    - strptime.el - Partial implementation of POSIX date and time parsing.
    - wikirel.el - Visit relevant pages on the Emacs Wiki.
    - loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies.
    - project-root.el - Define a project root and take actions based upon it.
    So I would like to know from you what you consider important/unimportant/supplementary... in a package manager for Emacs. Some ideas:
    - Many packages (incorporate the Emacs Lisp List and other lists of libraries).
    - Only packages that have been tested.
    - Support for more than one package archive (so people can choose between many/tested packages).
    - Dependency calculation based on required features only (see the sketch below).
    - Dependencies take particular versions into account.
    - Only use versions that have been released upstream.
    - Use versions from version control systems if available.
    - Packages are categorized.
    - Packages can be uninstalled and updated, not only installed.
    - Support creating forks of upstream versions of packages.
    - Support publishing these forks.
    - Support choosing a fork.
    - After installation, packages are activated.
    - Generate autoloads.
    - Integration with Emacswiki (see wikirel.el).
    - Users can tag, comment on ... packages and share that information.
    - Only FSF-assigned/GPL/FOSS software, or don't care about license.
    - The package manager should be integrated in Emacs.
    - Support contacting the author.
    - Lots of metadata.
    - Suggest alternatives before installing a particular package.
    Some discussions about the subject at hand:
    - emacs-devel 20080801
    - comp.emacs 20021121
    - RationalElispPackaging
    I am hoping for these kinds of answers:
    - Pointers to more implementations, discussions, etc.
    - Lengthy descriptions of a set of features that make up your ideal package manager.
    - Descriptions of one particular desired/undesired feature. This has the advantage that the regular voting mechanism allows us to see which features are most welcomed.
    - Feel free to elaborate on my ideas from above.
    - Surprise me.
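
    For what it's worth, here is a tiny illustration (mine, not from the post; Python stands in for Emacs Lisp, and the package names are invented) of the dependency-resolution idea above: given declared requirements, compute an install order.

        # Illustrative sketch only: derive an install order from declared
        # dependencies, as a package manager might. Requires Python 3.9+.
        from graphlib import TopologicalSorter

        # Toy metadata: package -> packages it requires (names invented).
        deps = {
            "my-mode": {"wikirel", "project-root"},
            "wikirel": {"ell"},
            "ell": set(),
            "project-root": set(),
        }

        order = list(TopologicalSorter(deps).static_order())
        print(order)  # dependencies first, e.g. ['ell', 'project-root', 'wikirel', 'my-mode']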


  • How to display this text on a BlackBerry and how to show the hyperlinks

    - by Changqi
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Strict//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>A history of Canoe Cove /</title> </head> <body> <div class="tei"> <p> A History of </p> <p> The General Stores </p> <p> There were several general stores in our <a class="search orgName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.orgNameTERM:%22Cove%22+AND+dc.type:collection">Cove</a> at different times. The one that lasted longest was at the Corner across from the school and it had many owners. Who established it is unclear but John MacKenzie, the piper, who was also a shoe maker lived there. He was a relative of the present day MacKenzies of <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22Canoe Cove%22+AND+dc.type:collection">Canoe Cove</a>. William MacKay who married Christena MacLean was operating it when it burned down and a store which had belonged to Neil "Cooper" MacLean was moved across to the site. This was later bought by <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacCannell+Neil%22+AND+dc.type:collection"> Neil MacCannell </a></span> of Long Creek , a schoolteacher who taught in the <a class="search orgName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.orgNameTERM:%22Cove%22+AND+dc.type:collection">Cove</a> for a few years. <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacNevin+Hector%22+AND+dc.type:collection"> Hector MacNevin </a></span> from <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22St. Catherines%22+AND+dc.type:collection">St. Catherines</a> operated it for a year while it still belonged to <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacCannell+Neil%22+AND+dc.type:collection"> Neil MacCannell </a></span> because Neil had accepted a job in <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22Charlottetown%22+AND+dc.type:collection">Charlottetown</a> as clerk of the Court. Later Mrs. John Angus Darrach bought it and she and her son George ran it for years until both had health problems, and had to close the store after which closing it never reopened. After George died and his wife Hazel moved to Montague to live with her family the building was sold to Robert Patterson . Rob lived in it for a few years, making many improvements then sold it to Kirk McAleer. </p> </div>
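
    One rough way to approach this (an illustration in Python rather than BlackBerry Java; the same idea maps onto a Java HTML parser or, on newer devices, the BrowserField API): parse the markup once, keep the visible text for display, and collect the href targets so they can be rendered as links. The file name below is a placeholder.

        # Sketch: extract readable text and hyperlink targets from markup
        # like the document above. Not BlackBerry code; illustration only.
        from html.parser import HTMLParser

        class TextAndLinks(HTMLParser):
            def __init__(self):
                super().__init__()
                self.text, self.links = [], []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(href)

            def handle_data(self, data):
                if data.strip():
                    self.text.append(data.strip())

        parser = TextAndLinks()
        parser.feed(open("canoe_cove.html", encoding="utf-8").read())
        print(" ".join(parser.text))   # the prose, ready for a text field
        print(parser.links)            # the URLs, ready for a link menu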


  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far, plus working in the high-level language arena is a great break from my embedded systems daytime work. I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple of technical difficulties. (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need to create a way to rummage through these fields and do data analysis -- based on a given computation, go look at this other set. Fine and dandy, nothing too outre. But this means I could really use a straightforward API for talking to MySQL. (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home. (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here; pick water, H2O, or dihydrogen monoxide ;-) ) (4) Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head figuring out a theoretical syntax question needed to spout out "hello world". What's the question? I'd like to dig into something different than my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular. Edit: since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.
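
    To make the "straightforward API" requirement concrete, here is a minimal sketch of querying the stock table from Python (assuming the PyMySQL package; the host, credentials, table, and column names are all invented):

        # Sketch: pull one day's quotes from the remote MySQL backend.
        import pymysql

        conn = pymysql.connect(host="db.example.com", user="paul",
                               password="secret", database="stocks")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT symbol, close_price FROM quotes WHERE trade_date = %s",
                    ("2010-04-01",),
                )
                for symbol, close_price in cur.fetchall():
                    print(symbol, close_price)
        finally:
            conn.close()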


  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure method to do backups for programmers who do research & development at home and cannot afford to lose any work? Conditions:
    - The backups must ALWAYS be within reasonably easy reach.
    - Internet connection cannot be guaranteed to be always available.
    - The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report: This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):
    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives.

    Other Possibilities:
    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account (a minimal local-snapshot sketch follows below).

    Strategies: See crazyscot's answer.
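
    As a concrete starting point for the "compress your work directory" idea above, here is a bare-bones sketch (paths are placeholders) that tars a work directory into a timestamped archive on a second disk; mailing or uploading the archive would be a separate step:

        # Sketch: timestamped snapshot of a work directory.
        import pathlib
        import tarfile
        import time

        src = pathlib.Path.home() / "work"            # what to back up
        dest = pathlib.Path("/media/backupdisk")      # where to put it
        name = dest / time.strftime("work-%Y%m%d-%H%M%S.tar.gz")

        with tarfile.open(name, "w:gz") as tar:
            tar.add(src, arcname=src.name)
        print("wrote", name)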


  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figuring it out. Apologies for the cross-post.

    Some background: My home network consists of my Desktop, a two-month-old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:
    - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and the router.
    - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
    - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
    - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:
    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
    - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the operating system itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst-case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.
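
    (Not from the post, but one way to put numbers on "the streaming will be dead within ten minutes": a small logger that timestamps a TCP connect to the router once a second, so the moment the Windows 7 box comes online is visible in the log. The router address is a placeholder.)

        # Diagnostic sketch: log round-trip time of a TCP connect to the router.
        import socket
        import time

        ROUTER = ("192.168.1.1", 80)  # adjust to the router's LAN address

        while True:
            start = time.time()
            try:
                socket.create_connection(ROUTER, timeout=2).close()
                print(f"{time.ctime()}  ok    {time.time() - start:.3f}s")
            except OSError as exc:
                print(f"{time.ctime()}  FAIL  {exc}")
            time.sleep(1)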


  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

        [root@fs-2 ~]# tail -18 /var/log/messages
        May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
        May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
        May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
        May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
        May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
        May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
        May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

        [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
        -bash: echo: write error: Invalid argument
        [root@fs-2 ~]#
        [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
        [snip]
        0 new device(s) found.
        0 device(s) removed.
        [root@fs-2 ~]#
        [root@fs-2 ~]# ls /dev/sda
        ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL 5.3:

        [root@fs-2 ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.
    Here is the lspci output:

        [root@fs-2 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.
    Here is the lspci output:

        [root@www-1 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
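
    A side note on the failed echo above: the kernel's scan file expects three space-separated wildcards ("- - -", meaning any channel, any target, any LUN), so the single token "---" is what draws "Invalid argument". A sketch of the usual rescan sequence (run as root; host numbers vary per machine):

        # Sketch: ask every SCSI/SATA host adapter to rescan its bus.
        import glob

        for scan in glob.glob("/sys/class/scsi_host/host*/scan"):
            with open(scan, "w") as f:
                f.write("- - -")   # wildcard channel/target/LUN
            print("rescanned", scan)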


  • Problem getting Java Streams in HP Tandem (Non-Stop)

    - by AndreaG
    Hi. We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. The Java version is 1.5.0_02. When performing basic I/O tasks like getting an output stream from or opening a client socket, we receive exceptions like java.io.IOException: Value out of range or java.net.SocketException: Value out of range ("value out of range" is Tandem native jargon for, well, quite everything I suppose). Has anybody had similar issues, i.e. I/O corruption while, for example, messing with JNI? I suppose there is something wrong with the system, but where might it be? Thank you.

    EDIT: adding snippets as requested.

    Sample snippet (a) - using Runtime.exec() (adapted):

        Properties envVars = new Properties();
        Process p = r.exec("/bin/env");
        envVars.load(p.getInputStream());

    Stack trace (a):

        java.io.IOException: Value out of range (errno:4034)
        at java.io.FileInputStream.readBytes(Native Method)
        at java.io.FileInputStream.read(FileInputStream.java:194)
        at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
        at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
        at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
        at java.io.InputStreamReader.read(InputStreamReader.java:167)
        at java.io.BufferedReader.fill(BufferedReader.java:136)
        at java.io.BufferedReader.readLine(BufferedReader.java:299)
        at java.io.BufferedReader.readLine(BufferedReader.java:362)
        at util.Environment.getVariables(Environment.java:39)

    The last line fails, and output gets redirected to the console (!).

    Sample snippet (b) - using HttpURLConnection:

        public WorkerThread(HttpURLConnection conn, String requestData, Logger logger) {
            this.conn = conn;
            ...
        }

        public void run() {
            OutputStream out = conn.getOutputStream();
        }

    Stack trace (b):

        java.net.SocketException: Value out of range (errno:4034)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.Socket.connect(Socket.java:507)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
        at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
        at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)

    Case (a) can be avoided because it was a workaround for other issues with a previous JRE version (!), but the same behaviour with sockets is really nasty.


  • Help deciphering exception details from WebRequestCreator when setting ContentType to "application/json"

    - by Stephen Patten
    Hello. This one is real simple: run this Silverlight 4 example with the ContentType property commented out and you'll get back a response from my service in XML. Now uncomment the property and run it and you'll get an exception similar to this one:

        System.Net.ProtocolViolationException occurred
        Message=A request with this method cannot have a request body.
        StackTrace:
          at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
          at System.Net.Browser.ClientHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
          at com.patten.silverlight.ViewModels.WebRequestLiteViewModel.<MakeCall>b__0(IAsyncResult cb)
        InnerException:

    What I am trying to accomplish is just pulling down some JSON-formatted data from my WCF endpoint. Can this really be this hard, or is it another classic example of just overlooking something simple? Edit: While perusing SO, I noticed similar posts, like this one: Why am I getting ProtocolViolationException when trying to use HttpWebRequest? Thank you, Stephen

        try
        {
            Address = "http://stephenpattenconsulting.com/Services/GetFoodDescriptionsLookup(2100)";

            // Get the URI
            Uri httpSite = new Uri(Address);

            // Create the request object using the Browser's networking stack
            // HttpWebRequest wreq = (HttpWebRequest)WebRequest.Create(httpSite);

            // Create the request using the operating system's networking stack
            HttpWebRequest wreq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(httpSite);

            // http://stackoverflow.com/questions/239725/c-webrequest-class-and-headers
            // These headers have been set, so use the property that has been exposed to change them
            // wreq.Headers[HttpRequestHeader.ContentType] = "application/json";
            //wreq.ContentType = "application/json";

            // Issue the async request.
            // http://timheuer.com/blog/archive/2010/04/23/silverlight-authorization-header-access.aspx
            wreq.BeginGetResponse((cb) =>
            {
                HttpWebRequest rq = cb.AsyncState as HttpWebRequest;
                HttpWebResponse resp = rq.EndGetResponse(cb) as HttpWebResponse;
                StreamReader rdr = new StreamReader(resp.GetResponseStream());
                string result = rdr.ReadToEnd();
                Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Result = result; });
                rdr.Close();
            }, wreq);
        }
        catch (WebException ex)
        {
            Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Error = ex.Message; });
        }
        catch (Exception ex)
        {
            Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Error = ex.Message; });
        }

    EDIT: This is how the WCF 4 endpoint is configured, primarily "adapted" from this link: http://geekswithblogs.net/michelotti/archive/2010/08/21/restful-wcf-services-with-no-svc-file-and-no-config.aspx

        [ServiceContract]
        public interface IRDA
        {
            [OperationContract]
            IList<FoodDescriptionLookup> GetFoodDescriptionsLookup(String id);

            [OperationContract]
            FOOD_DES GetFoodDescription(String id);

            [OperationContract]
            FOOD_DES InsertFoodDescription(FOOD_DES foodDescription);

            [OperationContract]
            FOOD_DES UpdateFoodDescription(String id, FOOD_DES foodDescription);

            [OperationContract]
            void DeleteFoodDescription(String id);
        }

        // RESTful service
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RDAService : IRDA
        {
            [WebGet(UriTemplate = "FoodDescription({id})")]
            public FOOD_DES GetFoodDescription(String id) { ... }

            [AspNetCacheProfile("GetFoodDescriptionsLookup")]
            [WebGet(UriTemplate = "GetFoodDescriptionsLookup({id})")]
            public IList<FoodDescriptionLookup> GetFoodDescriptionsLookup(String id)
            {
                return rda.GetFoodDescriptionsLookup(id);
            }

            [WebInvoke(UriTemplate = "FoodDescription", Method = "POST")]
            public FOOD_DES InsertFoodDescription(FOOD_DES foodDescription) { ... }

            [WebInvoke(UriTemplate = "FoodDescription({id})", Method = "PUT")]
            public FOOD_DES UpdateFoodDescription(String id, FOOD_DES foodDescription) { ... }

            [WebInvoke(UriTemplate = "FoodDescription({id})", Method = "DELETE")]
            public void DeleteFoodDescription(String id) { ... }
        }

    And the portion of my web.config that pertains to WCF:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
          <standardEndpoints>
            <webHttpEndpoint>
              <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
            </webHttpEndpoint>
          </standardEndpoints>
        </system.serviceModel>
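
    For reference, at the HTTP level Content-Type describes a request body, which a GET does not have; that is what the ProtocolViolationException is complaining about. The desired response format is negotiated with the Accept header instead, which is what WCF's automaticFormatSelectionEnabled (see the config above) keys on. A Python sketch of the distinction against the same endpoint:

        # Sketch: ask for JSON with Accept rather than Content-Type.
        import urllib.request

        url = ("http://stephenpattenconsulting.com/Services/"
               "GetFoodDescriptionsLookup(2100)")
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(resp.headers.get("Content-Type"))  # ideally application/json
            print(resp.read()[:200])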


  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also making his own personal projects in his own time, using his own equipment, yet the company he works for tries to claim ownership of the project. I really find this annoying, and bang out of order. It should also be illegal. I am in this position (I work for a company and am working on my own systems -- from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system). While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough; my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. The funny thing is that work is so boring, easy and slow that I have plenty of free time, which I wish I could spend on something productive -- said system. The problem is, my company does not take hiring technical talent seriously. This is my first job, I am a junior coder (but my status/position doesn't really reflect what I can do), but I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would. They would try to milk my success (I've made a good impression, so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stacker here? It really makes my blood boil. What I find funny is that my company hardly has the expertise and resources to even be able to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in the project, then they would think twice! It's exactly because of this that I save the best code for myself and at work I write rubbish code full of code smells. The company doesn't really care about error handling, as long as the business functionality works (i.e. a scheduled email sends, but there is no error handling). They'd think twice when they see the embarrassment and business cost of a YSOD...


  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning == Looking for well-supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here. Many Python IDEs will start you with a template like:

        print 'hello world'

    That's not enough... So here's my skeleton code to get this question started.

    My Module Skeleton, Short Version:

        #!/usr/bin/env python
        """
        Module Docstring
        """

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    and, My Module Skeleton, Long Version:

        #!/usr/bin/env python
        # -*- coding: ascii -*-
        """
        Module Docstring
        Docstrings: http://www.python.org/dev/peps/pep-0257/
        """

        __author__ = 'Joe Author ([email protected])'
        __copyright__ = 'Copyright (c) 2009-2010 Joe Author'
        __license__ = 'New-style BSD'
        __vcs_id__ = '$Id$'
        __version__ = '1.2.3'  # Versioning: http://www.python.org/dev/peps/pep-0386/

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    Notes:

        """
        ===MODULE TYPE===
        Since the vast majority of my modules are "library" types, I have
        constructed this example skeleton as such. For modules that act as
        the main entry for running the full application, you would make
        changes such as running a main() function instead of the test()
        function in __main__.

        ===VERSIONING===
        The following practice, specified in PEP 8, no longer makes sense:

            __version__ = '$Revision: 1.2.3 $'

        for two reasons: (1) Distributed version control systems make it
        necessary to include more than just a revision number, e.g. author
        name and revision number. (2) It's a revision number, not a version
        number. Instead, the __vcs_id__ variable is being adopted. This
        expands to, for example:

            __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $'

        ===VCS DATE===
        Likewise, the date variable has been removed:

            __date__ = '$Date: 2009/01/02 20:19:18 $'

        ===CHARACTER ENCODING===
        If the coding is explicitly specified, then it should be set to the
        default setting of ascii. This can be modified if necessary (rarely
        in practice). Defaulting to utf-8 can cause anomalies with editors
        that have poor unicode support.
        """

    There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best Python module skeleton code?

    Update: Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".


  • What is a good dumbed-down, safe template system for PHP?

    - by Wilhelm
    (Summary: My users need to be able to edit the structure of their dynamically generated web pages without being able to do any damage.) Greetings, ladies and gentlemen. I am currently working on a service where customers from a specific demographic can create a specific type of web site and fill it with their own content. The system is written in PHP. Many of the users of this system wish to edit how their particular web site looks, or, more commonly, have a designer do it for them. Editing the CSS is fine and dandy, but sometimes that's not enough. Sometimes they want to shuffle the entire page structure around by editing the raw HTML of the dynamically created web pages. The templating system used by WordPress is, as far as I can see, perfect for my use. Except for one thing which is critically important: in addition to being able to edit how comments are displayed or where the menu goes, someone editing a template can have that template execute arbitrary PHP code. As the same codebase runs all these different sites, with all content in the same database, allowing my users to run arbitrary code is clearly out of the question. So what I need is a dumbed-down, idiot-proof templating system where my users can edit most of the page structure on their own, pulling in the dynamic sections wherever, without being able to even echo 1+1;. Observe the following pseudocode:

        <!DOCTYPE html>
        <title><!-- $title --></title>
        <!-- header() -->
        <!-- menu() -->
        <div>Some random custom crap added by the user.</div>
        <!-- page_content() -->

    That's the degree of power I'd like to grant my users. They don't need to do their own loops or calculations or anything. Just include my variables and functions and leave the rest to me. I'm sure I'm not the only person on the planet that needs something like this. Do you know of any ready-made templating systems I could use? Thanks in advance for your reply.
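
    For illustration, the whole "dumbed-down" idea can be a single substitution pass over a whitelist; this Python sketch (PHP would look much the same, and the placeholder names are simplified to plain words) replaces only registered placeholders, so a template can never execute anything:

        # Sketch: whitelist-only placeholder substitution.
        import re

        def render(template, values):
            # Substitute <!-- name --> only for whitelisted names;
            # anything unknown stays as inert text.
            def sub(match):
                name = match.group(1)
                return str(values[name]) if name in values else match.group(0)
            return re.sub(r"<!--\s*(\w+)\s*-->", sub, template)

        page = render(
            "<title><!-- title --></title>\n<!-- menu -->\n<!-- evil -->",
            {"title": "My site", "menu": "<ul>...</ul>"},
        )
        print(page)  # <!-- evil --> passes through untouched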


  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi All, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing, a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. Now, I can do this in two ways:

        uint32_t w = 0x11223344;
        uint8_t a = (w & 0xff000000) >> 24;
        uint8_t b = (w & 0x00ff0000) >> 16;
        uint8_t c = (w & 0x0000ff00) >> 8;
        uint8_t d = (w & 0x000000ff);

    However, part of me thinks that isn't particularly efficient. I thought a better way would be to use a union representation like so:

        typedef union {
            struct {
                uint8_t d;
                uint8_t c;
                uint8_t b;
                uint8_t a;
            };
            uint32_t n;
        } word32;

    Using this method I can assign word32 w = 0x11223344; and then access the various parts as I require (w.a == 0x11 on little endian). However, at this stage I come up against endianness issues, namely, on big endian systems my struct is defined incorrectly, so I need to re-order the word prior to it being passed in. This I can do without too much difficulty. My question is, then: is the first part (various bitwise ANDs and shifts) efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought a union would be more efficient as it would essentially convert to memory offsets, like so:

        mov eax, [r9+8]

    Would a compiler realise that is what is happening in the bit-shift case above? If it matters, I'm using C99; specifically, my compiler is clang (LLVM). Thanks in advance.
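
    One way to see why endianness is a red herring for the shift-and-mask version (a quick Python check standing in for the C): shifts operate on the value, so they give the same bytes on any machine, while the union reads memory, so byte order matters there.

        # Value-based extraction: identical on any architecture.
        import struct

        w = 0x11223344
        print(hex((w >> 24) & 0xFF), hex(w & 0xFF))   # 0x11 0x44 everywhere

        # Memory-based view (what the union does): order depends on layout.
        print(struct.pack("<I", w).hex())  # little-endian bytes: 44332211
        print(struct.pack(">I", w).hex())  # big-endian bytes:    11223344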


  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems and I think I have the fix, but I'm not certain my understanding is correct. In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate  date
        rectime  time
        system   varchar(20)
        count    integer
        accum1   integer
        accum2   integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected to the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table specified as follows:

        create view blah_v (
            recdate, rectime, system, count,
            accum1, accum2
        ) as select distinct
            recdate, rectime, system, count,
            value (case when count > 0 then accum1 / count end, 0),
            value (case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish -- you're preaching to the choir). We've noticed that the time difference between doing:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since, if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct cannot have duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.
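
    A small experiment along these lines (SQLite here purely for convenience; the poster's DBMS may plan differently, and plan wording varies by version) shows how to check whether the planner does extra de-duplication work for a given DISTINCT:

        # Sketch: compare query plans for DISTINCT over the full primary
        # key versus DISTINCT over a single column.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE blah (
            recdate TEXT, rectime TEXT, system TEXT,
            count INTEGER, accum1 INTEGER, accum2 INTEGER,
            PRIMARY KEY (recdate, rectime, system))""")

        for sql in (
            "SELECT DISTINCT recdate, rectime, system FROM blah",
            "SELECT DISTINCT recdate FROM blah",
        ):
            print(sql)
            for row in db.execute("EXPLAIN QUERY PLAN " + sql):
                print("   ", row)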


  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch.

        lesson1: A--B--C
                        \
        lesson2:         D--E--F
                                \
        lesson3:                 G--H--I

    However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3.

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K
                                \        \
        lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K--M
                                \           \
        lesson3:                 G--H--I--L--N

    Here are the problems I imagine having:
    - If I had many lessons, and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off. (A cascade-merge sketch follows below.)
    - When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    - If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    - When I look at the history, I imagine becoming terribly confused about what I've done. Compare the above diagram to a hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

        lesson1: A--B--C--J
                           \
        lesson2:            D2--E2--F2--M
                                         \
        lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?
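
    (For the first problem above, a hypothetical cascade-merge helper, assuming the branch names shown in the diagrams; it stops on the first conflict:)

        # Sketch: ripple a fix forward by merging each lesson into the next.
        import subprocess

        branches = ["lesson1", "lesson2", "lesson3"]

        def git(*args):
            subprocess.run(["git", *args], check=True)

        for parent, child in zip(branches, branches[1:]):
            git("checkout", child)
            git("merge", parent)   # non-zero exit (conflict) stops the script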


  • No Norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in the LaTeX format, and while using Norwegian special characters, I get an error using

        \usepackage[utf8x]{inputenc}

    to try and display the Norwegian (Scandinavian) special characters in PostScript/PDF/DVI format, saying:

        Package utf8x Error: MalformedUTF-8sequence.

    So while that didn't work, I tried out another possible solution:

        \usepackage{ucs}
        \usepackage[norsk]{babel}

    And when I tried to save that in Emacs I get this message:

        These default coding systems were tried to encode text in the buffer `lol.tex':
        (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277)
        (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278)
        (1051 . 4194277) (1078 . 4194296) (1105 . 4194296))
        However, each of them encountered characters it couldn't encode:
        utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ...

    Thanks to Emacs I have the possibility to check out the properties of those characters, and the first one tells me:

        character: \345 (4194277, #o17777745, #x3fffe5)
        preferred charset: eight-bit (Raw bytes 128-255)
        code point: 0xE5
        syntax: w    which means: word
        buffer code: #xE5
        file code: not encodable by coding system utf-8-unix
        display: not encodable for terminal

    Which doesn't tell me much. When I try to build this with texi2dvi --dvipdf filename.text I get a perfectly fine PDF, all without the special Norwegian characters. When I am about to save, Emacs also asks me: "Select coding system (default raw-text):" and I type in utf-8 to choose its coding system. I have also tried to choose default raw-text to see if I get some different result. But nothing. At last I tried

        \lstset{inputencoding=utf8x, extendedchars=\true}

    ... a code I came over while trying to google the solution to this problem. Which gives me this error:

        Undefined control sequence.

    So basically, I have tried every encoding option I have been able to find and nothing works. I am desperately trying to make this work since the Norwegian translation must be published before the deadline. As additional information I may add that I found out later on that I only had en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen + reboot without any changes. I hope I provided enough information to get some assistance; the characters in question are æ ø å.
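
    A clue worth noting: the bytes Emacs lists decode cleanly as Latin-1 (0xE5 = å, 0xF8 = ø, 0xC5 = Å), which suggests the file is ISO-8859-1/-15 on disk while the preamble declares utf8x. Either declare latin1 in inputenc or convert the file; a conversion sketch (file name taken from the message above):

        # Sketch: re-encode a Latin-1 .tex file as UTF-8.
        print(bytes([0xE5, 0xF8, 0xC5]).decode("latin-1"))  # prints: åøÅ

        data = open("lol.tex", "rb").read()
        open("lol-utf8.tex", "wb").write(data.decode("latin-1").encode("utf-8"))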


  • How to merge a "branch" that isn't really a branch (wasn't created by an svn copy)

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this: They created a folder in branches. They checked out all the code from the trunk to somewhere on their desktop. They added all that code to the newly created folder as though it was a bunch of brand new files. So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN, and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column. Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge and of course it doesn't work. Because, the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this: Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk. Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch". This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN? (You may be wondering: Why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first?! We didn't -- this is a school assignment, which is a collaboration between two different classes -- the ones in the lower class were given a very quick hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)
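
    (A hypothetical sketch of resolution 1 above, treating each branch revision as a plain patch against a trunk working copy; the URL and revision range are stand-ins, and svn's history linkage is lost either way:)

        # Sketch: replay "branch" revisions onto the trunk as patches.
        import subprocess

        BRANCH = "http://server/repo/branches/fake-branch"  # placeholder

        for rev in range(541, 552):   # the revisions made on the "branch"
            diff = subprocess.run(
                ["svn", "diff", "-c", str(rev), BRANCH],
                check=True, capture_output=True, text=True,
            ).stdout
            subprocess.run(["patch", "-p0"], input=diff, text=True, check=True)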


  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think they're used for. Please correct me if I am wrong with any of them. So let me split this question in three parts:
    - Plugins where I don't even understand the data
    - Plugins where I understand the data but don't know what I should look out for
    - Plugins which I think I understand

    Plugins where I don't even understand the data. These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge on operating systems/hardware... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these "guessing"...
    - Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output. But that's as far as it goes.
    - Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
    - IO Service Time: This one is a huge mess, and it's near impossible to see something in the graph at all.

    Plugins where I understand the data but don't know what I should look out for:
    - IOStat (blocks/second read/written): I assume the thing to look out for in here are spikes? Which would mean that the device is in heavy use?
    - Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant.
    - VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
    - Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
    - inode table usage: What should I look for in this graph?

    Plugins which I think I understand. I'll be guessing some things here... correct me if I am wrong.
    - Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
    - Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config.
    - eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
    - eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with Firewall throughput.
    - number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
    - processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slowly but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
    - process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some.
    - cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing).
    - file table usage: Number of actively open files. If this is reaching max, you may have a process opening but not properly releasing files.
    - load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs. (A sketch of the underlying numbers follows below.)
    - memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers, you are fine.
    - swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!
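
    (As a footnote to the load average and memory items: on Linux, munin's plugins sample plain /proc files, so you can look at the raw numbers directly; a quick sketch:)

        # Sketch: the raw numbers behind the load-average and memory graphs.
        print(open("/proc/loadavg").read().split()[:3])   # 1/5/15-minute load

        meminfo = {}
        for line in open("/proc/meminfo"):
            key, value = line.split(":", 1)
            meminfo[key] = value.strip()
        for key in ("MemTotal", "MemFree", "Buffers", "Cached", "SwapFree"):
            print(key, meminfo[key])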


  • Motherboard/PSU crippling USB and SATA

    - by celebdor
    I very recently bought a new desktop computer. The motherboard is a Z77MX-D3H and the power supply is an OCZ ZS Series 550W. The issue I have is that once I boot to the operating system (I have tried Fedora and Ubuntu with kernels 2.6.38 - 3.4.0), my hard drive (2.5" magnetic) occasionally makes a power-switch noise and it resets. Needless to say, when this drive is the OS drive, the OS crashes. I also have an SSD that works fine with the same OS configurations, but if I have the magnetic hard drive attached as a second drive, it works erratically and the reconnects result in corrupted data. I also noticed that whenever I plug an external USB 2.0 or USB 3.0 hard drive into the computer, the issue with the reconnects is even worse:

        [   52.198441] sd 7:0:0:0: [sdc] Spinning up disk...
        [   57.955811] usb 4-3: USB disconnect, device number 3
        [   58.023687] .ready
        [   58.023914] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed
        [   58.023919] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.023932] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024061] sd 7:0:0:0: [sdc] READ CAPACITY failed
        [   58.024063] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024064] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024099] sd 7:0:0:0: [sdc] Write Protect is off
        [   58.024101] sd 7:0:0:0: [sdc] Mode Sense: 00 00 00 00
        [   58.024135] sd 7:0:0:0: [sdc] Asking for cache data failed
        [   58.024137] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [   58.024400] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed
        [   58.024402] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024405] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024448] sd 7:0:0:0: [sdc] READ CAPACITY failed
        [   58.024450] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [   58.024451] sd 7:0:0:0: [sdc] Sense not available.
        [   58.024469] sd 7:0:0:0: [sdc] Asking for cache data failed
        [   58.024471] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [   58.024472] sd 7:0:0:0: [sdc] Attached SCSI disk
        [   58.407725] usb 4-3: new SuperSpeed USB device number 4 using xhci_hcd
        [   58.424921] scsi8 : usb-storage 4-3:1.0
        [   59.424185] scsi 8:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6
        [   59.424406] scsi 8:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6
        [   59.425098] sd 8:0:0:0: Attached scsi generic sg2 type 0
        [   59.425176] ses 8:0:0:1: Attached Enclosure device
        [   59.425248] ses 8:0:0:1: Attached scsi generic sg3 type 13
        [   61.845836] sd 8:0:0:0: [sdc] 976707584 512-byte logical blocks: (500 GB/465 GiB)
        [   61.845838] sd 8:0:0:0: [sdc] 4096-byte physical blocks
        [   61.846336] sd 8:0:0:0: [sdc] Write Protect is off
        [   61.846338] sd 8:0:0:0: [sdc] Mode Sense: 47 00 10 08
        [   61.846718] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.846720] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.848105] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.848106] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.857147] sdc: sdc1
        [   61.858915] sd 8:0:0:0: [sdc] No Caching mode page present
        [   61.858916] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        [   61.858918] sd 8:0:0:0: [sdc] Attached SCSI disk
        [   69.875809] usb 4-3: USB disconnect, device number 4
        [   70.275816] usb 4-3: new SuperSpeed USB device number 5 using xhci_hcd
        [   70.293063] scsi9 : usb-storage 4-3:1.0
        [   71.292257] scsi 9:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6
        [   71.292505] scsi 9:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6
        [   71.293527] sd 9:0:0:0: Attached scsi generic sg2 type 0
        [   71.293668] ses 9:0:0:1: Attached Enclosure device
        [   71.293758] ses 9:0:0:1: Attached scsi generic sg3 type 13
        [   73.323804] usb 4-3: USB disconnect, device number 5
        [  101.868078] ses 9:0:0:1: Device offlined - not ready after error recovery
        [  101.868124] ses 9:0:0:1: Failed to get diagnostic page 0x50000
        [  101.868131] ses 9:0:0:1: Failed to bind enclosure -19
        [  101.868288] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed
        [  101.868292] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868296] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868428] sd 9:0:0:0: [sdc] READ CAPACITY failed
        [  101.868434] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868439] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868468] sd 9:0:0:0: [sdc] Write Protect is off
        [  101.868473] sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00
        [  101.868580] sd 9:0:0:0: [sdc] Asking for cache data failed
        [  101.868584] sd 9:0:0:0: [sdc] Assuming drive cache: write through
        [  101.868845] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed
        [  101.868849] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868854] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868894] sd 9:0:0:0: [sdc] READ CAPACITY failed
        [  101.868898] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  101.868903] sd 9:0:0:0: [sdc] Sense not available.
        [  101.868961] sd 9:0:0:0: [sdc] Asking for cache data failed
        [  101.868966] sd 9:0:0:0: [sdc] Assuming drive cache: write through
        [  101.868969] sd 9:0:0:0: [sdc] Attached SCSI disk

    Now, if I plug the same drive into the powered USB 2.0 hub of my monitor, the issue is not reproduced (at least in a 20h-long operation). Also, the issue of the USB reconnects is less frequent if the hard drive is plugged in before I switch on the computer. Does anybody have some advice as to what I could do? Which is the faulty part (or parts) that I should replace? As for me, I really don't know whether to point my finger at the PSU or the motherboard (I have updated to the latest firmware and checked the BIOS settings several times). EDIT: The reconnects are happening both on the SATA-connected drives and the USB-connected drives.

    Read the article

  • How to merge an improperly created "branch" that isn't really a branch (wasn't created by an svn copy)

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this:

    1. They created a folder in branches.
    2. They checked out all the code from the trunk to somewhere on their desktop.
    3. They added all that code to the newly created folder as though it was a bunch of brand-new files.

    So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column. Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge and of course it doesn't work, because the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this:

    - Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk.
    - Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch".

    This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related, even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN? (You may be wondering: why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first?! We didn't -- this is a school assignment, a collaboration between two different classes -- the ones in the lower class were given a very quick hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)
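
    One possible workaround, sketched with hypothetical paths and assuming the fake branch appeared in r540: Subversion cannot retroactively record ancestry, but a two-URL merge of the same path compares plain file contents, so the branch's changes can be replayed onto a trunk working copy in one step (conflicts with the trunk's own changes still have to be resolved by hand):

        # from a clean working copy of the trunk; ^/branches/fake_branch and r540 are placeholders
        svn merge ^/branches/fake_branch@540 ^/branches/fake_branch .
        svn commit -m "Fold changes from the incorrectly created branch back into the trunk"

    After that, a proper branch can be recreated with svn copy ^/trunk ^/branches/new_branch, so future merges have real ancestry to work with.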

    Read the article

  • Segfault (possibly due to casting)

    - by BSchlinker
    I don't normally go to stackoverflow for sigsegv errors, but I have done all I can with my debugger at the moment. The segmentation fault error is thrown following the completion of the function. Any ideas what I'm overlooking? I suspect that it is due to the casting of the sockaddr to the sockaddr_in, but I am unable to find any mistakes there. (Removing that line gets rid of the seg fault -- but I know that may not be the root cause here).

        // basic setup
        int sockfd;
        char str[INET_ADDRSTRLEN];
        sockaddr* sa;
        socklen_t* sl;
        struct addrinfo hints, *servinfo, *p;
        int rv;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_DGRAM;

        // return string
        string foundIP;

        // setup the struct for a connection with selected IP
        if ((rv = getaddrinfo("4.2.2.1", NULL, &hints, &servinfo)) != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
            return "1";
        }

        // loop through all the results and make a socket
        for (p = servinfo; p != NULL; p = p->ai_next) {
            if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
                perror("talker: socket");
                continue;
            }
            break;
        }

        if (p == NULL) {
            fprintf(stderr, "talker: failed to bind socket\n");
            return "2";
        }

        // connect the UDP socket to something
        connect(sockfd, p->ai_addr, p->ai_addrlen); // we need to connect to get the systems local IP

        // get information on the local IP from the socket we created
        getsockname(sockfd, sa, sl);

        // convert the sockaddr to a sockaddr_in via casting
        struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa;

        // get the IP from the sockaddr_in and print it
        inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN);
        printf("%s\n", str);

        // return the IP
        return foundIP;
    }
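
    For what it's worth, the crash is more plausibly explained by the two uninitialized pointers than by the cast: getsockname writes an address through sa and reads/writes a length through sl, yet neither ever points to real storage. A minimal sketch of the suspected fix, reusing the names above:

        // sketch: give getsockname real storage to fill in, instead of
        // uninitialized sockaddr* / socklen_t* pointers
        struct sockaddr_storage ss;   // large enough for IPv4 or IPv6
        socklen_t sl = sizeof ss;     // in/out: capacity in, actual size out
        if (getsockname(sockfd, (struct sockaddr *)&ss, &sl) == 0) {
            struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)&ss;  // the cast itself is fine
            inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN);
        }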

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in a customer's environment. There are two types of environments - Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning, the customer installs my app, but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one which would be a) the most flexible, covering the most common modern environments, and b) the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application and it would check their passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows if that will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is to drown in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer #3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
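
    For option (3), a minimal JAAS sketch along these lines delegates the password check to the customer's KDC. The "KrbLogin" entry name and the config file are assumptions; the realm and KDC would be supplied through the standard java.security.krb5.realm and java.security.krb5.kdc system properties:

        // JAAS config file (passed via -Djava.security.auth.login.config=jaas.conf):
        //   KrbLogin { com.sun.security.auth.module.Krb5LoginModule required; };
        import javax.security.auth.callback.Callback;
        import javax.security.auth.callback.NameCallback;
        import javax.security.auth.callback.PasswordCallback;
        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;

        public class KerberosAuthenticator {
            public static boolean authenticate(String user, char[] password) {
                try {
                    LoginContext lc = new LoginContext("KrbLogin", callbacks -> {
                        for (Callback cb : callbacks) {
                            if (cb instanceof NameCallback) {
                                ((NameCallback) cb).setName(user);
                            } else if (cb instanceof PasswordCallback) {
                                ((PasswordCallback) cb).setPassword(password);
                            }
                        }
                    });
                    lc.login();   // throws if the KDC rejects the credentials
                    lc.logout();
                    return true;
                } catch (LoginException e) {
                    return false; // bad credentials, or the KDC was unreachable
                }
            }
        }

    The appeal of this design is that the application never sees the user database at all; it only needs the two Kerberos parameters mentioned above, which matches both Active Directory (Windows) and MIT/Heimdal KDCs (*nix).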

    Read the article

  • Microsoft Access and Java JDBC-ODBC Error

    - by user1638362
    Trying to insert some values into a Microsoft Access database using Java. I get an error, however:

        java.sql.SQLException: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
        Exception in thread "main" java.lang.NullPointerException

    To create the data source I'm using the SysWoW64 odbcad32 and adding the data source to the System DSN list. I mention this because I have seen elsewhere that problems occur with 64-bit systems. However, it still doesn't work for me. Microsoft Office is 32-bit.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class RMIAuctionHouseJDBC {

            /**
             * @param args
             */
            public static void main(String[] args) {
                String theItem = "Car";
                String theClient = "KHAN";
                String theMessage = "1001";
                Connection conn = null;

                // Create connection object
                try {
                    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                    System.out.println("Driver Found");
                } catch (Exception e) {
                    System.out.println("Driver Not Found");
                    System.err.println(e);
                }

                // connecting to database
                try {
                    String database = "jdbc:odbc:Driver={Microsoft Access Driver (*.accdb)};DBQ=AuctionHouseDatabase.accdb;";
                    conn = DriverManager.getConnection(database, "", "");
                    System.out.println("Conn Found");
                } catch (SQLException se) {
                    System.out.println("Conn Not Found");
                    System.err.println(se);
                }

                // Create select statement and execute it
                try {
                    /* String insertSQL = "INSERT INTO AuctionHouse VALUES ( "
                            + "'" + theItem + "', "
                            + "'" + theClient + "', "
                            + "'" + theMessage + "')"; */
                    Statement stmt = conn.createStatement();
                    String insertSQL = "Insert into AuctionHouse VALUES ('Item','Name','Price')";
                    stmt.executeUpdate(insertSQL);
                    // Retrieve the results
                    conn.close();
                } catch (SQLException se) {
                    System.out.println("SqlStatment Not Found");
                    System.err.println(se);
                }
            }
        }

        java.sql.SQLException: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
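
    The architecture-mismatch message itself usually means a 64-bit JVM is trying to load the 32-bit Access ODBC driver; running the application under a 32-bit JRE, to match the 32-bit Office install, is the usual cure. Separately, once connected, a PreparedStatement is a safer version of the commented-out string concatenation. A sketch, with the table and column layout assumed from the question:

        // requires import java.sql.PreparedStatement; assumes conn is a live Connection
        String insertSQL = "INSERT INTO AuctionHouse VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(insertSQL)) {
            ps.setString(1, theItem);     // placeholders avoid quoting/injection issues
            ps.setString(2, theClient);
            ps.setString(3, theMessage);
            ps.executeUpdate();
        }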

    Read the article

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users. :) Basically we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like this? What is the best way to do it?

    Example: A device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (the details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is, the driver translates the request into the device's language (protocol) and sends the message, the device responds, and the driver translates again for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; there are also several versions of the device which use different protocols, etc., but they would all use the same time-sync service. The goal is to abstract the devices through the drivers so the same service can communicate with all of them.

    Another example: we find out that remote communications with the device are down. So we send somebody out with a PDA; he connects to the device using a USB cable and starts up an application which has the same functionality as the time-sync service. Again, a middleware instance is created (on the PDA) to translate the communication between the application and the device, this time using USB/serial as the medium rather than TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
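
    One common shape for this kind of design (a sketch only, all names invented) is to hide every medium behind one small transport interface, so the protocol-translation layer never knows whether the bytes travel over TCP, serial/USB or IR:

        // hypothetical transport abstraction: each medium implements this interface
        #include <cstddef>
        #include <vector>

        class Channel {                        // one implementation per physical medium
        public:
            virtual ~Channel() = default;
            virtual std::size_t read(char* buf, std::size_t len) = 0;  // blocking read
            virtual void write(const char* buf, std::size_t len) = 0;
        };

        class Driver {                         // protocol logic, media-agnostic
        public:
            explicit Driver(Channel& ch) : ch_(ch) {}
            // translate a service request into the device protocol and back
            std::vector<char> request(const std::vector<char>& msg) {
                ch_.write(msg.data(), msg.size());
                char buf[512];
                std::size_t n = ch_.read(buf, sizeof buf);
                return std::vector<char>(buf, buf + n);
            }
        private:
            Channel& ch_;
        };

    A TcpChannel (wrapping boost.asio), a UsbSerialChannel, and so on would each implement Channel, and the same Driver and services would then run unchanged on the server or on the PDA.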

    Read the article

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: Usability Testing, Screen Capture (Video and Still), Version Control, Task Lists, Bug Tracking, a Developer IDE, a Web Server, a Blog, Shared Doc Editing on the Web, Team and Individual Chat, Email, Databases and Continuous Integration. This does assume your team members provide their own machines, and one person has a spare old computer to be the Source Repository and Web Server. All for under $200.

    Usability
    - Silverback licenses = 3 x $49.95. "Spontaneous, unobtrusive usability testing software for designers and developers."

    Source Control Server and Clients (multiple options)
    - Subversion = Free. Subversion is an open source version control system.
    - Versions (currently in beta) = Free. Versions provides a pleasant way to work with Subversion on your Mac.
    - Diffly = Free. "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit Diffly makes it easy to select the files you want to check-in and assemble a useful commit message."

    Bug/Feature/Defect Tracking (multiple options)
    - Bugzilla = Free. Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect Tracking Systems allow individuals or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees.
    - Trac = Free. Trac is an enhanced wiki and issue tracking system for software development projects.

    Database Server & Clients
    - MySQL = Free
    - CocoaMySQL = Free

    Web Server
    - Apache = Free

    Development and Build Tools
    - XCode = Free
    - CruiseControl = Free. CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.

    Collaboration Tools
    - Writeboard = Free
    - Ta-da List = Free
    - Campfire Chat for 4 users = Free
    - WordPress = Free. "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time."
    - Gmail = Free. "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."

    Screen Capture (Video / Still)
    - Jing = Free. "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere."

    Lots of great responses: TeamCity [Yo|||], Skype [Eric DeLabar], FogBugz [chakrit], iChat AV and Screen Sharing (built-in to OS) [amrox], Google Docs [amrox].

    Read the article

  • Inference engine to calculate matching set according to internal rules

    - by Zecrates
    I have a set of objects with attributes and a bunch of rules that, when applied to the set of objects, provide a subset of those objects. To make this easier to understand I'll provide a concrete example. My objects are persons and each has three attributes: country of origin, gender and age group (all attributes are discrete). I have a bunch of rules, like "all males from the US", which correspond with subsets of this larger set of objects. I'm looking for either an existing Java "inference engine" or something similar, which will be able to map from the rules to a subset of persons, or advice on how to go about creating my own. I have read up on rule engines, but that term seems to be exclusively used for expert systems that externalize the business rules, and usually doesn't include any advanced form of inferencing. Here are some examples of the more complex scenarios I have to deal with:

    1. I need the conjunction of rules. So when presented with both "include all males" and "exclude all US persons in the 10 - 20 age group," I'm only interested in the males outside of the US, and the males within the US that are outside the 10 - 20 age group.
    2. Rules may have different priorities (explicitly defined). So a rule saying "exclude all males" will override a rule saying "include all US males."
    3. Rules may be conflicting. So I could have both an "include all males" and an "exclude all males", in which case the priorities will have to settle the issue.
    4. Rules are symmetric. So "include all males" is equivalent to "exclude all females."
    5. Rules (or rather subsets) may have meta rules (explicitly defined) associated with them. These meta rules will have to be applied in any case that the original rule is applied, or if the subset is reached via inferencing. So if a meta rule of "exclude the US" is attached to the rule "include all males", and I provide the engine with the rule "exclude all females," it should be able to infer that the "exclude all females" subset is equivalent to the "include all males" subset and as such apply the "exclude the US" rule additionally.

    I can in all likelihood live without item 5, but I do need all the other properties mentioned. Both my rules and objects are stored in a database and may be updated at any stage, so I'd need to instantiate the 'inference engine' when needed and destroy it afterward.
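
    In the absence of an off-the-shelf engine, items 1-3 can be covered with a small amount of plain Java (Java 16+ for records): evaluate every rule against every person and let the highest-priority matching rule decide. A sketch with invented names; symmetry (item 4) and meta rules (item 5) would need an extra rule-rewriting layer on top:

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.Comparator;
        import java.util.List;
        import java.util.function.Predicate;

        record Person(String country, String gender, String ageGroup) {}
        record Rule(Predicate<Person> matches, boolean include, int priority) {}

        class RuleSet {
            private final List<Rule> rules = new ArrayList<>();

            void add(Rule r) { rules.add(r); }

            // conjunction: every rule is consulted; conflicts are settled by priority,
            // so giving exclusions a higher priority carves them out of inclusions
            List<Person> apply(Collection<Person> population) {
                List<Person> result = new ArrayList<>();
                for (Person p : population) {
                    rules.stream()
                         .filter(r -> r.matches().test(p))
                         .max(Comparator.comparingInt(Rule::priority))
                         .filter(Rule::include)      // persons matching no rule are excluded
                         .ifPresent(r -> result.add(p));
                }
                return result;
            }
        }

    Usage would look like new Rule(p -> p.gender().equals("male"), true, 1) for "include all males" and new Rule(p -> p.country().equals("US") && p.ageGroup().equals("10-20"), false, 2) for the higher-priority exclusion; since rules are plain predicates, they can be rebuilt from the database each time the engine is instantiated.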

    Read the article
