Search Results

Search found 16940 results on 678 pages for 'disk drive'.

Page 557/678 | < Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >

  • The Project Location is Not Trusted error in Visual Studio

    - by SLC
    Quite a simple error, and the reason is obvious - I mapped a network drive, and I am opening the solution from it. Visual Studio gives me this error. I tried Googling and, to my surprise, couldn't find a fix. I am running Visual Studio 2008. The solutions I found on Google say I should run Mscorcfg.msc, but unfortunately I don't seem to have that file anywhere on my computer, nor do I seem to have anything in my Control Panel relating to the .NET Framework. I can, of course, run .NET applications fine, so the framework exists. Another solution suggested running caspol.exe, although that is for .NET 2; I also tried it, to no avail. Any ideas? I should add that I am trying to add the path to whatever trusted list there is.
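
    Caspol.exe ships with the .NET Framework itself (for example under %windir%\Microsoft.NET\Framework\v2.0.50727), so it can usually be run even when no .NET item appears in Control Panel, and .NET 3.5 still uses the 2.0 runtime's security policy, which is why the ".NET 2" tool remains the relevant one. A sketch of the kind of command used in that era to grant FullTrust to a network path - the server and share names here are placeholders, not taken from the question:

      caspol.exe -machine -addgroup 1 -url "file://\\server\share\*" FullTrust -name "TrustedShare"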

    Read the article

  • GDI+ removable device I/O safety problem

    - by sxingfeng
    I am using GDI+ for image format checking. It is really surprising that GDI+'s Image img(path); does not throw an exception when a device is removed. For example, while I am checking a list of image files on a removable device, I unplug the disk; my application then crashes. How can I avoid this problem? I am using C++ and GDI+ on Windows. Many thanks!

    Read the article

  • How does mod_cache work with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
    Quick question before I explain my flow: can mod_cache perform revalidation with If-None-Match only once max-age has expired, when it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to our origin server. For instance: the first request goes to the origin server and mod_cache saves the response into the cache according to the Cache-Control: max-age header, and only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates every request, regardless of whether max-age is defined or not. My configuration is for Apache 2.4.3 (Windows); on Linux I see the same behavior, which I show below.

      ServerName proxy.lo
      ProxyRequests Off
      ProxyPreserveHost Off
      Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language"
      RequestHeader set X-Forwarded-Proto "http"
      # modify headers for user agents
      Header set Cache-Control "private, no-cache, no-store, no-transform"
      CacheQuickHandler off
      CacheDefaultExpire 300
      # the origin server does not provide Last-Modified
      CacheIgnoreNoLastMod On
      CacheIgnoreCacheControl On
      # the origin server defines Cache-Control: private, no-store only for user agents,
      # therefore I would like to ignore those headers on the proxy server
      CacheStorePrivate On
      CacheStoreNoStore On
      CacheEnable disk /
      CacheRoot "C:/Apache.Cache"
      CacheDirLevels 5
      CacheDirLength 4
      CacheMinExpire 15
      CacheDetailHeader on
      CacheHeader on
      KeepAlive Off
      ProxyPass / http://origin.lo/
      ProxyPassReverse / http://origin.lo/

    Also, I have turned the log level up to debug to see how mod_cache handles content for caching. I provide this to show that mod_proxy always decides that the content isn't fresh. Why? max-age was provided (see below).

      [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage?
      [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage?
      [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage?
      [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers.
      [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage
      [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage
      [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0)
      [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo)
      [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80
      [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80
      [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo)
      [Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo)
      [Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter()
      [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage
      [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter.
      [Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached.

    The first request to the origin server, without mod_proxy, to http://origin.lo/:

      GET http://origin.lo/testpage HTTP/1.1
      Host: origin.lo
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
      Accept: application/json
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The first response from the origin, without mod_proxy:

      HTTP/1.1 200 OK
      Cache-Control: must-revalidate, proxy-revalidate, max-age=30
      Content-Type: application/json; charset=utf-8
      ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Server: Microsoft-IIS/7.5
      X-AspNet-Version: 4.0.30319
      X-Powered-By: ASP.NET
      Date: Sun, 04 Nov 2012 10:11:01 GMT
      Content-Length: 1877

    So I assumed that revalidation should occur only 30 seconds after the successful response. Isn't that right? Let's check it. :) Within 30 seconds, Google Chrome didn't perform any request to the origin server to revalidate and returned the response from its local cache.
    When max-age has expired, Google Chrome performs a request to revalidate:

      GET http://origin.lo/testpage HTTP/1.1
      Host: origin.lo
      Connection: keep-alive
      Cache-Control: max-age=0
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
      Accept: application/xml
      If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the response:

      HTTP/1.1 304 Not Modified
      Cache-Control: must-revalidate, proxy-revalidate, max-age=30
      ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Server: Microsoft-IIS/7.5
      X-AspNet-Version: 4.0.30319
      X-Powered-By: ASP.NET
      Date: Sun, 04 Nov 2012 10:16:20 GMT

    As you can see, everything works as expected: the user agent revalidates only when max-age has expired. Let's now try to perform the same flow through mod_proxy (see the configuration above). The first request:

      GET http://proxy.lo/testpage HTTP/1.1
      Host: proxy.lo
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
      Accept: application/json
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the response was:

      HTTP/1.1 200 OK
      Date: Sun, 04 Nov 2012 10:23:36 GMT
      Server: Apache
      Cache-Control: private, no-cache, no-store, no-transform
      Content-Type: application/json; charset=utf-8
      ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Content-Length: 1932
      Vary: Accept,Content-Type,Content-Encoding,Accept-Language
      X-Cache: MISS from proxy.lo
      X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo
      Connection: close

    OK, let's look at the disk cache to see how the request and response were stored (I cut the binary data):

      http://proxy.lo/testpage?
      Cache-Control: private, no-cache, no-store, no-transform
      Content-Type: application/json; charset=utf-8
      ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Date: Sun, 04 Nov 2012 10:27:15 GMT
      Content-Length: 1932
      Vary: Accept, Content-Type, Content-Encoding, Accept-Language
      Host: proxy.lo
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
      Accept: application/json
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
      X-Forwarded-Proto: http
      Cache-Control: max-age=300, must-revalidate
      X-Forwarded-For: 192.168.1.100
      X-Forwarded-Host: proxy.lo
      X-Forwarded-Server: origin.lo

    OK, what do we see?
    We see that the first request was performed with max-age=300 and must-revalidate. OK, that looks good to me, so let's perform the next call:

      GET http://proxy.lo/testpage HTTP/1.1
      Host: proxy.lo
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
      Accept: application/json
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the second response from mod_proxy:

      HTTP/1.1 200 OK
      Date: Sun, 04 Nov 2012 10:31:58 GMT
      Server: Apache
      Cache-Control: private, no-cache, no-store, no-transform
      ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
      Content-Length: 1932
      Vary: Accept,Content-Type,Content-Encoding,Accept-Language
      X-Cache: REVALIDATE from proxy.lo
      X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo
      Connection: close
      Content-Type: application/json; charset=utf-8

    SO, MY QUESTION IS: why does mod_proxy perform revalidation on every request, regardless of the fact that max-age is defined? N.B. Apache 2.4.3. Thanks, I would be grateful for any help.

    Read the article

  • Help choosing Alfresco or Nuxeo for a DMS

    - by user267073
    Hi! I have a requirement to develop a DMS (Document Management System) with some initial requirements:
    1. If possible, the DMS should be open source
    2. Initially the DMS should support up to 500 users
    3. The system should be scalable in the sense of users or content
    4. Documents/content should be stored on a file system
    5. Documents should be able to be marked for later destruction
    6. Mandatory to have workflow capabilities
    7. Mandatory to have version control capability
    8. Nice to have SSO (Single Sign-On) with a Liferay portal
    9. Nice to have the possibility to expose some of the functionality via portlets in Liferay
    10. Document management should be done via the web interface
    11. Nice to have shared drive capability
    12. Nice to have events and notifications about added/changed content
    At the moment I am in doubt choosing between Alfresco and Nuxeo. I would appreciate any help choosing between them. Thanks in advance.

    Read the article

  • Selenium Grid not always using all of its registered RCs, why?

    - by BenA
    My Selenium Grid setup is as follows (all VMs):
    VM1 - Windows 7 x64 - Grid Hub + 2 RCs registering the default *firefox environment
    VM2 - Windows XP x32 - 2 RCs registering the default *firefox environment
    VM3 - Windows XP x32 - 2 RCs registering the default *firefox environment
    I'm happily using MbUnit and Gallio to drive the Grid, but my problem is that sometimes the Grid hub will stop passing executions over to one or more of the RCs, despite their showing as available on the hub console. They seem to be happily maintaining their heartbeat back to the hub, but they're never asked to do any more work. This is after they had been executing tests earlier in the test run. Does anybody have any ideas why this should happen? In every case where I've observed this behaviour, the last test the RC executed before it seemingly got ignored by the hub passed, and the session was successfully closed. Interestingly, whenever it happens to more than one of the RCs, it's always (so far) been the pair running on the same VM. Yet they're managing to maintain their heartbeat, so it isn't a network connectivity problem. Any help would be greatly appreciated!

    Read the article

  • iPhone/iPad orientation handling

    - by Mark
    This is more of a general question that I'd like some guidance on. Basically, I'm learning iPad/iPhone development and have finally come across the multi-orientation support question. I have looked up a fair amount of documentation, and my book "Beginning iPhone 3 Development" has a nice chapter on it. But my question is this: if I were to programmatically change my controls (or even use different views for each orientation), how on earth do people maintain their code base? I can just imagine so many issues with spaghetti code/thousands of "if" checks all over the place that it would drive me nuts to make one small change to the UI arrangement. Does anyone have experience handling this issue? What is a nice way to control it? Thanks a lot, Mark

    Read the article

  • Can I use some combination of HttpHandler and HttpModule to create an asynchronous file upload?

    - by scottm
    I am trying to write a simple asynchronous file upload control for ASP.NET. I've tried a few implementations out there (the Ajax Control Toolkit async upload, Telerik RadAsyncUpload, AjaxUploader, Uploadify, etc.), but they all leave me wanting something more. I'd rather not use a Flash component; a simple throbber would be OK. Some don't have client-side callbacks. I figure you can make an HTTP module that checks the request form's enctype and files collection on BeginRequest and saves those files to disk, then somehow use an HTTP handler to poll the module's status, but I can't quite figure out how to put it all together asynchronously. Can you help me arrange the pieces?
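
    One way to picture the pieces (a sketch only, with made-up names, not a complete solution - the hard part, reading the multipart body in chunks inside the module so that progress is meaningful, is deliberately omitted) is a shared progress store that the module writes to and a small handler that the client polls:

      using System;
      using System.Collections.Generic;
      using System.Web;

      // Shared progress store: the module writes, the polling handler reads.
      public static class UploadProgressStore
      {
          private static readonly Dictionary<string, int> percentById = new Dictionary<string, int>();
          private static readonly object sync = new object();

          public static void Set(string id, int percent) { lock (sync) { percentById[id] = percent; } }

          public static int Get(string id)
          {
              lock (sync)
              {
                  int percent;
                  return percentById.TryGetValue(id, out percent) ? percent : 0;
              }
          }
      }

      // Module: spots multipart posts early; chunked reading/saving would go here.
      public class AsyncUploadModule : IHttpModule
      {
          public void Init(HttpApplication app)
          {
              app.BeginRequest += delegate(object sender, EventArgs e)
              {
                  HttpRequest request = ((HttpApplication)sender).Context.Request;
                  string contentType = request.ContentType ?? "";
                  if (contentType.StartsWith("multipart/form-data", StringComparison.OrdinalIgnoreCase))
                  {
                      // Read the body in chunks, save the files to disk, and call
                      // UploadProgressStore.Set(uploadId, percent) as bytes arrive (omitted).
                  }
              };
          }

          public void Dispose() { }
      }

      // Handler the client polls, e.g. GET /UploadStatus.ashx?id=abc123
      public class UploadStatusHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              string id = context.Request.QueryString["id"] ?? "";
              context.Response.ContentType = "application/json";
              context.Response.Write("{\"percent\": " + UploadProgressStore.Get(id) + "}");
          }
      }

    The module and handler would then be registered in web.config, and the client-side throbber would simply poll the handler URL with the upload id while the post is in flight.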

    Read the article

  • Handling the pointer while updating a key value in RPGLE

    - by abhinav singh
    My code goes like this:
      femp       uf   e           k disk
      dvar1      s              5p 0
      c     *loval        setll     emp
      c                   read      emp
      c                   dow       not %eof(emp)
      c                   eval      ecode = ecode + 10
      c                   eval      var1 = ecode
      c                   update    recemp
      c     var1          setgt     emp
      c                   read      emp
      c                   enddo
      c                   eval      *inlr = *on
    Here there is a file named emp with record format recemp and ecode as the key. Now, when I read the file and then update ecode without using SETGT, the pointer does not move ahead; it keeps updating the same ecode value many times. When I use SETGT, the pointer picks up the next record, but it didn't work when two ecode values are the same, and it also will not work with descending key values. Is there any solution so that I can set the pointer regardless of whether the values are the same, ascending, or descending? Thanks.

    Read the article

  • Why does Chrome ignore local jQuery cookies?

    - by Nathan Long
    I am using the jQuery Cookie plugin (download and demo and source code with comments) to set and read a cookie. I'm developing the page on my local machine. The following code will successfully set a cookie in Firefox 3, IE 7, and Safari (PC), but if the browser is Google Chrome AND the page is a local file, it does not work.
      $.cookie("nameofcookie", cookievalue, {path: "/", expires: 30});
    What I know:
    - The plugin's demo works with Chrome.
    - If I put my code on a web server (address starting with http://), it works with Chrome.
    So the cookie fails only for Google Chrome on local files. Possible causes:
    - Google Chrome doesn't accept cookies from web pages on the hard drive (paths like file:///C:/websites/foo.html)
    - Something in the plugin implementation causes Chrome to reject such cookies
    Can anyone confirm this and identify the root cause?
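
    For what it's worth, a workaround commonly mentioned at the time (this is an assumption about the testing setup, and the flag has since been removed from newer Chrome builds) was to launch Chrome with file cookies explicitly enabled while developing against file:// URLs:

      chrome.exe --enable-file-cookies file:///C:/websites/foo.html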

    Read the article

  • If cookies are disabled, does ASP.NET store the cookie as a session cookie instead or not?

    - by Erx_VB.NExT.Coder
    Basically, if cookies are disabled on the client, I'm wondering about this:
      Dim newCookie = New HttpCookie("cookieName", "cookieValue")
      newCookie.Expires = DateTime.Now.AddDays(1)
      Response.Cookies.Add(newCookie)
    Notice I set a date, so it should be stored on disk. If cookies are disabled, does ASP.NET automatically store this cookie as a session cookie (which is a cookie that lasts in browser memory until the user closes the browser, if I am not mistaken), OR does ASP.NET not add the cookie at all (anywhere), in which case I would have to re-add the cookie to the collection without the date (which stores it as a session cookie)? Of course, this would require me to add the cookie twice, perhaps the second time unnecessarily if it is being stored in browser memory anyway. I'm just trying not to store it twice, as that's just bad code!! Any ideas whether I need to write another line or not? (Which would be:)
      Response.Cookies.Add(New HttpCookie("cookieName", "cookieValue")) ' session cookie in client browser memory
    Thanks guys

    Read the article

  • Manage spreadsheet versioning

    - by oo
    We have a lot of VBA code in spreadsheets, and a lot of the time people save them to local drives. When we want to upgrade the spreadsheets we push a new version out to a shared drive, but we don't have any way of enforcing that people don't use the old versions of the spreadsheets. Is there some best practice here for deploying VBA spreadsheets, so that if someone loads an old version it won't open, or will ask them to upgrade? It seems like this must be an issue for any custom solution, so I would have thought MS would have some solution here. Does Microsoft have a standard versioning/deployment solution for this, or do I need to come up with some home-grown solution (the spreadsheet pings a database on startup to check its version)?

    Read the article

  • "Primary Filegroup is Full" in SQL Server 2008 Standard for no apparent reason

    - by Anton Gogolev
    Our database is currently at 64 GB, and one of our apps started to fail with the following error:
      System.Data.SqlClient.SqlException: Could not allocate space for object 'cnv.LoggedUnpreparedSpos'.'PK_LoggedUnpreparedSpos' in database 'travelgateway' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
    I double-checked everything: all files in the single filegroup are allowed to autogrow with reasonable increments (100 MB for the data file, 10% for the log file), more than 100 GB of free space is available for the database, and tempdb is set to autogrow as well, with plenty of free HDD space on its drive. To resolve the problem, I added a second file to the filegroup and the error went away. But I feel uneasy about this whole situation. Where's the problem here, guys?

    Read the article

  • Java program strange behavior - how to fix it?

    - by Frank
    My notebook has an Intel CPU, running Windows Vista. My program looks like this:
      public class Tool_Lib_Simple {
          public static void main(String[] args) {
              System.out.println("123");
          }
      }
    When I run it, I expect to see "123", but the output was "Hi NM : How are you NM ?", which was the old output from two days ago before I changed my program. If I copy this program into another project in NetBeans 6.7, it will run correctly and output "123", and if I change the program name from "Tool_Lib_Simple" to something else, it will also output "123" - but just not under the name "Tool_Lib_Simple" in the current project's src directory. I've deleted the "build" directory and re-compiled and re-built; it still gives me "Hi NM : How are you NM ?" as a result. It seems to me the old version of my program is saved on the hard drive or in RAM and got stuck there. I've programmed for many years and have hardly ever encountered this kind of problem. How do I fix this? Frank

    Read the article

  • Limited IList<> implementation?

    - by Wam
    Hello people, I'm doing some work with stats, and want to refactor the following method:
      public static void blah(float[] array) {
          // Do something over array, sum it for example.
      }
    However, instead of using float[] I'd like to be using some kind of indexed enumerable (to use dynamic loading from disk for very large arrays, for example). I had created a simple interface, and wanted to use it:
      public interface IArray<T> : IEnumerable<T> {
          T this[int j] { get; set; }
          int Length { get; }
      }
    My problem is: float[] only inherits IList, not IArray, and AFAIK there's no way to change that. I'd like to ditch IArray and only use IList, but my classes would need to implement many methods like Add and RemoveAt even though they are fixed-size. And then my question: how can float[] implement IList when it doesn't have these methods? Any help is welcome. Cheers
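
    A note on that last question: single-dimensional arrays pick up IList<T> from the runtime, and the members that would change the size (Add, Insert, Remove, RemoveAt, Clear) are present but throw NotSupportedException when called. A fixed-size class can use exactly the same trick; the sketch below is an illustration with a made-up name (FixedArray<T>), not code from the question:

      using System;
      using System.Collections;
      using System.Collections.Generic;

      // A fixed-size, indexable wrapper that still satisfies IList<T>.
      public class FixedArray<T> : IList<T>
      {
          private readonly T[] items;

          public FixedArray(int length) { items = new T[length]; }

          public T this[int index]
          {
              get { return items[index]; }
              set { items[index] = value; }
          }

          public int Count { get { return items.Length; } }
          public bool IsReadOnly { get { return false; } }

          public int IndexOf(T item) { return Array.IndexOf(items, item); }
          public bool Contains(T item) { return IndexOf(item) >= 0; }
          public void CopyTo(T[] array, int arrayIndex) { items.CopyTo(array, arrayIndex); }

          public IEnumerator<T> GetEnumerator()
          {
              foreach (T item in items) yield return item;
          }

          IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

          // Fixed-size: the list-shaped mutators exist but are unsupported,
          // which is how T[] itself behaves when accessed through IList<T>.
          public void Add(T item) { throw new NotSupportedException("Collection is fixed-size."); }
          public void Insert(int index, T item) { throw new NotSupportedException("Collection is fixed-size."); }
          public bool Remove(T item) { throw new NotSupportedException("Collection is fixed-size."); }
          public void RemoveAt(int index) { throw new NotSupportedException("Collection is fixed-size."); }
          public void Clear() { throw new NotSupportedException("Collection is fixed-size."); }
      }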

    Read the article

  • Our embedded Linux system won't recognize a USB device if it is plugged in before power-up. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device is a Gumstix Overo board running OpenEmbedded Linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB device apparently is not probed by the system (and hence does not turn on). In a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening: there is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - the USB system is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional info:
    1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected.
    2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs/controllers. It is literally as if the device is not present and not plugged in.
    3) We have tried a brand-new image from Gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 Gumstix devices we use.
    Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the USB system that fully emulates unplugging and replugging a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel is initializing the USB subsystem. I'm not really sure, though. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine
    Output etc.:
      $ uname -a
      Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux

    When the system is up and running and the spectro is plugged in (working as intended), this is lsusb:

      Bus 001 Device 116: ID 2457:1022
      Device Descriptor:
        bLength 18
        bDescriptorType 1
        bcdUSB 2.00
        bDeviceClass 0 (Defined at Interface level)
        bDeviceSubClass 0
        bDeviceProtocol 0
        bMaxPacketSize0 64
        idVendor 0x2457
        idProduct 0x1022
        bcdDevice 0.02
        iManufacturer 1 USB4000 1.01.11
        iProduct 2 Ocean Optics USB4000
        iSerial 0
        bNumConfigurations 1
        Configuration Descriptor:
          bLength 9
          bDescriptorType 2
          wTotalLength 46
          bNumInterfaces 1
          bConfigurationValue 1
          iConfiguration 0
          bmAttributes 0x80 (Bus Powered)
          MaxPower 400mA
          Interface Descriptor:
            bLength 9
            bDescriptorType 4
            bInterfaceNumber 0
            bAlternateSetting 0
            bNumEndpoints 4
            bInterfaceClass 255 Vendor Specific Class
            bInterfaceSubClass 0
            bInterfaceProtocol 0
            iInterface 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x01 EP 1 OUT
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0200 1x 512 bytes
              bInterval 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x82 EP 2 IN
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0200 1x 512 bytes
              bInterval 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x86 EP 6 IN
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0200 1x 512 bytes
              bInterval 0
            Endpoint Descriptor:
              bLength 7
              bDescriptorType 5
              bEndpointAddress 0x81 EP 1 IN
              bmAttributes 2
                Transfer Type Bulk
                Synch Type None
                Usage Type Data
              wMaxPacketSize 0x0200 1x 512 bytes
              bInterval 0
      Device Qualifier (for other device speed):
        bLength 10
        bDescriptorType 6
        bcdUSB 2.00
        bDeviceClass 0 (Defined at Interface level)
        bDeviceSubClass 0
        bDeviceProtocol 0
        bMaxPacketSize0 64
        bNumConfigurations 1
      Device Status: 0x0000 (Bus Powered)

    dmesg output:

      usb usb1: usb auto-resume
      hub 1-0:1.0: hub_resume
      usb usb2: usb auto-resume
      ehci-omap ehci-omap.0: resume root hub
      hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000
      hub 2-0:1.0: hub_resume
      hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000
      hub 1-0:1.0: hub_suspend
      usb usb1: bus auto-suspend
      hub 2-0:1.0: hub_suspend
      usb usb2: bus auto-suspend
      ehci-omap ehci-omap.0: suspend root hub
      usb usb2: usb resume
      ehci-omap ehci-omap.0: resume root hub
      hub 2-0:1.0: hub_resume
      ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT
      hub 2-0:1.0: port 2: status 0501 change 0001
      hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000
      hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s
      ehci-omap ehci-omap.0: port 2 high speed
      ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
      usb 2-2: new high speed USB device using ehci-omap and address 2
      ehci-omap ehci-omap.0: port 2 high speed
      ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
      usb 2-2: default language 0x0409
      usb 2-2: udev 2, busnum 2, minor = 129
      usb 2-2: New USB device found, idVendor=2457, idProduct=1022
      usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
      usb 2-2: Product: Ocean Optics USB4000
      usb 2-2: Manufacturer: USB4000 1.01.11
      usb 2-2: uevent
      usb 2-2: usb_probe_device
      usb 2-2: configuration #1 chosen from 1 choice
      usb 2-2: uevent
      usb 2-2: adding 2-2:1.0 (config #1, interface 0)
      usb 2-2:1.0: uevent
      drivers/usb/core/inode.c: creating file '002'

    dmesg has nothing to say, and lsusb simply lists nothing else but the two default USB controllers/hubs if we plug the device in before the system is turned on.

    Read the article

  • How to use sessions with django piston auth?

    - by xyld
    The problem is that I want to store authentication in a cookie that I can present to django piston rather than requiring user/password to be typed in each time (without hardcoding or storing the user/pass combo somewhere on disk). I was hoping to accomplish this with cookies like someone would without the piston API. Am I missing something? Django Piston doesn't seem to care about session cookies at all? Or can someone suggest a good alternative? Maybe I shouldn't use Piston?

    Read the article

  • Get the expanding node in a TreeView

    - by Iulian
    I have a TreeView control that functions like a folder browser. Because loading the entire folder structure from disk takes a lot of time, I'm trying to load only one level at a time. So I have a function that adds nodes for all the folders under the current node. I thought the best method would be to run it on the BeforeExpand event of the TreeView. UpdateTreeView(TreeView.SelectedNode); is not working, because clicking the + sign to expand does not also select the node. So how do I find the node that is expanding?
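
    In WinForms, the BeforeExpand event already passes the expanding node in its event arguments, so the handler does not need to rely on SelectedNode. A minimal sketch (UpdateTreeView is assumed to be the poster's own method, inside the form class):

      public Form1()
      {
          InitializeComponent();
          treeView1.BeforeExpand += treeView1_BeforeExpand;
      }

      private void treeView1_BeforeExpand(object sender, TreeViewCancelEventArgs e)
      {
          // e.Node is the node being expanded, regardless of which node is currently selected.
          UpdateTreeView(e.Node);
      }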

    Read the article

  • Java - copy Jar Folder

    - by Ripei
    Hey Java developers, I am confronted with a problem. I've got an ".apk" file in one package of my application; an apk is a kind of a jar file (apk = Android Package). I now want to copy this jar file out of my program to some other location on the PC. Normally I would do this by using:
      FileInputStream is = new FileInputStream(this.getClass().getResource("/resources/myApp.apk").getFile());
    and then writing it to disk with a FileOutputStream. But since an .apk is a kind of a .jar, it doesn't work: it just copies the .apk file, but without the other files it contains. Any help would be appreciated.

    Read the article

  • Displaying local images in the web browser control

    - by RichardK9
    Hi, I am writing a Windows Forms application and am creating a report for users to view in the WebBrowser control. The problem is that it does not seem to display an image which is located on my local hard drive; it just displays the "broken image" red cross symbol. The path of the image is correct, and when I view the source code of the generated HTML it works in either Firefox or Chrome, but not in Internet Explorer (which I presume is what is used for this WebBrowser control). Any help would be much appreciated. Thanks, Richard.
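
    One thing worth ruling out (an assumption on my part, since the generated HTML isn't shown): when the report is fed to the control through DocumentText, the page's base is about:blank, so relative image paths have nothing to resolve against; absolute file:/// URIs are the safest form. A sketch with made-up names:

      // imagePath is assumed to be an absolute local path, e.g. C:\Reports\chart.png
      string imageUri = new Uri(imagePath).AbsoluteUri;   // yields file:///C:/Reports/chart.png

      string html =
          "<html><body>" +
          "<h1>Report</h1>" +
          "<img src=\"" + imageUri + "\" alt=\"chart\" />" +
          "</body></html>";

      webBrowser1.DocumentText = html;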

    Read the article

  • Preserve images in Excel headers using Apache POI

    - by ddm
    I am trying to generate Excel reports using Apache POI 3.6 (latest). Since POI has limited support for header and footer generation (text only), I decided to start from a blank Excel file with the header already prepared and fill the Excel cells using POI (cf. question 714172). Unfortunately, when opening the workbook with POI and writing it immediately back to disk (without any cell manipulation), the header seems to be lost. Here is the code I used to test this behavior:
      public final class ExcelWorkbookCreator {
          public static void main(String[] args) {
              FileOutputStream outputStream = null;
              try {
                  outputStream = new FileOutputStream(new File("dump.xls"));
                  InputStream inputStream = ExcelWorkbookCreator.class.getResourceAsStream("report_template.xls");
                  HSSFWorkbook workbook = new HSSFWorkbook(inputStream, true);
                  workbook.write(outputStream);
              } catch (Exception exception) {
                  throw new RuntimeException(exception);
              } finally {
                  if (outputStream != null) {
                      try {
                          outputStream.close();
                      } catch (IOException exception) {
                          // Nothing much to do
                      }
                  }
              }
          }
      }

    Read the article

  • How to execute a command from within an MSI?

    - by okhalid
    Hi, I have an installation setup which works like this:
      /exec.exe /some-command
    This whole setup is located on a shared disk to which my target machines have access. All I want is to create a small MSI wrapper that basically executes the above command; I don't need any other fancy things. I looked around on the web; there are tools that create an MSI for you, but they generate a huge amount of other things with them as well. My need is very simple and straightforward. It would be great if someone could help me with this issue. Thanks, Omer

    Read the article

  • Bug when drawing a QImage on a widget with PIL and PyQt

    - by oulipo
    I'm trying to write a small graphics application, and I need to construct an image using PIL that I show in a widget. The image is correctly constructed (I can check with im.show()), and I can convert it to a QImage that I can save normally to disk (using QImage.save), but if I try to draw it directly on my QWidget, it only shows a white square. Here I commented out the code that is not working (converting the Image into a QImage and then a QPixmap results in a white square), and I made a dirty hack to save the image to a temporary file and load it directly into a QPixmap, which works but is not what I want to do: https://gist.github.com/f6d479f286ad75bf72b7 Does someone have an idea? If it can help: when I try to save my QImage to a BMP file, I can access its content, but if I try to save it to a PNG it is completely white.

    Read the article

  • How can I run NUnit (Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.NET 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and the tests will run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for automatically running Selenium tests against multiple browser environments without writing duplicate tests? Thank you.

    Read the article

  • UnauthorizedAccessException: cannot resolve Directory.GetFiles failure

    - by Ric Coles
    Hi all, the Directory.GetFiles method fails on the first encounter with a folder it has no access rights to. The method throws an UnauthorizedAccessException (which can be caught), but by the time this happens, the method has already failed/terminated. The code I am using is listed below:
      try
      {
          // looks in the stated directory and returns the paths of all files found
          getFiles = Directory.GetFiles(@directoryToSearch, filetype, SearchOption.AllDirectories);
      }
      catch (UnauthorizedAccessException)
      {
      }
    As far as I am aware, there is no way to check beforehand whether a certain folder has access rights defined. In my example, I'm searching a disk across a network, and when I come across a root-access-only folder, my program fails. I have searched for a few days now for any sort of resolution, but this problem doesn't seem to be very prevalent here or on Google. I look forward to any suggestions you may have. Regards
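
    The usual workaround (a sketch, not the poster's code) is to walk the directory tree manually, so that a protected folder only skips its own branch instead of aborting the whole search:

      using System;
      using System.Collections.Generic;
      using System.IO;

      static class SafeFileSearch
      {
          // Recursively collects files matching a pattern, skipping folders we cannot read.
          public static List<string> GetFilesSafe(string root, string pattern)
          {
              var results = new List<string>();
              try
              {
                  results.AddRange(Directory.GetFiles(root, pattern));
              }
              catch (UnauthorizedAccessException)
              {
                  // no right to list files here - skip this folder's files
              }

              string[] subDirs;
              try
              {
                  subDirs = Directory.GetDirectories(root);
              }
              catch (UnauthorizedAccessException)
              {
                  return results; // cannot even enumerate subfolders - stop descending
              }

              foreach (string dir in subDirs)
              {
                  results.AddRange(GetFilesSafe(dir, pattern));
              }
              return results;
          }
      }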

    Read the article

  • rsync creating thousands of ..ds_store files from mounted volume

    - by daniel Crabbe
    Hello there - I've been using rsync on OS X to sync all our website admins. It was working fine until the OS X 10.6.3 update! Now it creates thousands of ZeroK folders. It only does it when syncing to a mounted drive (which we need to do); when I sync to my hard drive it works as usual! I've tried excludes, which don't seem to be working, and also tried different versions of rsync, so it's an OS X issue.
      echo ""
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      echo " SYNCING up KINEMASTIK"
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/

      echo ""
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      echo " SYNCING up CHRIS BROOKS YOURADMIN"
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/
    Has anyone experienced the same? Many thanks, D.
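
    Purely a guess to experiment with, not something from the original post: -N, -A and -X ask rsync to copy create times, ACLs and extended attributes, and a mounted network volume often cannot store those, which is one situation where OS X tooling falls back to AppleDouble (._*) sidecar files. Dropping those flags and excluding Finder metadata is a cheap test:

      /usr/local/bin/rsync -aHv --progress --exclude=".DS_Store" --exclude="._*" --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/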

    Read the article
