Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • Can not call web service with basic authentication using WCF

    - by RexM
    I've been given a web service written in Java that I'm not able to make any changes to. It requires the user to authenticate with basic authentication to access any of the methods. The suggested way to interact with this service in .NET is by using Visual Studio 2005 with WSE 3.0 installed. This is an issue, since the project is already using Visual Studio 2008 (targeting .NET 2.0). I could do it in VS2005, however I do not want to tie the project to VS2005, or do it by creating an assembly in VS2005 and including that in the VS2008 solution (which basically ties the project to 2005 anyway for any future changes to the assembly). I think that either of these options would make things complicated for new developers by forcing them to install WSE 3.0, and would keep the project from being able to use 2008 and features in .NET 3.5 in the future... i.e., I truly believe using WCF is the way to go. I've been looking into using WCF for this, however I'm unsure how to get the WCF client to understand that it needs to send the authentication headers along with each request. I'm getting 401 errors when I attempt to do anything with the web service. This is what my code looks like:
        WebHttpBinding webBinding = new WebHttpBinding();
        ChannelFactory<MyService> factory = new ChannelFactory<MyService>(webBinding,
            new EndpointAddress("http://127.0.0.1:80/Service/Service/"));
        factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
        factory.Credentials.UserName.UserName = "username";
        factory.Credentials.UserName.Password = "password";
        MyService proxy = factory.CreateChannel();
        proxy.postSubmission(_postSubmission);
    This will run and throw the following exception: "The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Basic realm=realm'." And this has an inner exception of: "The remote server returned an error: (401) Unauthorized." Any thoughts about what might be causing this issue would be greatly appreciated.
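    The exception suggests the client is answering the server's Basic challenge anonymously because the default WebHttpBinding sends no credentials at all. Below is a minimal, hedged sketch of one way to wire up Basic authentication on the binding; the IMyService contract is a hypothetical stand-in for the question's MyService, and TransportCredentialOnly assumes the endpoint is plain HTTP (use WebHttpSecurityMode.Transport for HTTPS).
        // Sketch only: assumes an HTTP endpoint that expects HTTP Basic credentials.
        using System.ServiceModel;
        using System.ServiceModel.Description;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "submissions")]
            void PostSubmission(string submission);
        }

        public static class BasicAuthClientSketch
        {
            public static void Main()
            {
                var webBinding = new WebHttpBinding();
                // Send the user name and password using the Basic scheme on every request.
                webBinding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
                webBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

                var factory = new ChannelFactory<IMyService>(webBinding,
                    new EndpointAddress("http://127.0.0.1:80/Service/Service/"));
                factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
                factory.Credentials.UserName.UserName = "username";
                factory.Credentials.UserName.Password = "password";

                IMyService proxy = factory.CreateChannel();
                proxy.PostSubmission("payload");
            }
        }
    Note that with TransportCredentialOnly the credentials travel in clear text, so this is only reasonable on a trusted network.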

    Read the article

  • Android: Multi-touch doesn't work as expected?

    - by user187532
    Hi folks, help me resolve the issue below. I have three image buttons on screen, all handled by the same OnTouchListener, as below.
        buttonOne.setOnTouchListener(this);
        buttonTwo.setOnTouchListener(this);
        buttonThree.setOnTouchListener(this);
    I override public boolean onTouch(View v, MotionEvent event), and inside it I check for touches on these three image buttons like below.
        ImageButton imageBtn = (ImageButton) v;
        if ( imageBtn == buttonOne )        // first button touch   ..Log..
        else if ( imageBtn == buttonTwo )   // second button touch  ..Log..
        else if ( imageBtn == buttonThree ) // third button touch   ..Log..
    My problem is that, even though all three buttons go through this one touch handler, it does not detect anything special when I touch all three buttons at once to try to produce a multi-touch effect; it only detects one image button touch at a time, even though I'm touching all three image buttons. As I am developing this project on the Android 1.6 SDK, is there any problem with achieving my requirement (multi-touch), or is this a known issue? I would have hoped that if it works for a single button touch, it should also work when clicking three image buttons at a time and produce three log lines as per my code above. How do I resolve this for my case? Please don't question why I am still developing on 1.6 for such a requirement. Thank you. I appreciate your suggestions!

    Read the article

  • fabric deploy problem

    - by alexarsh
    Hi, I'm trying to deploy a Django app with Fabric and get the following error:
        Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> -u <username> -p <password>
        [192.168.2.93] run: cat /etc/issue
        Traceback (most recent call last):
          File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 37, in deploy
            checkup()
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 140, in checkup
            if not 'Ubuntu' in run('cat /etc/issue'):
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper
          File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect
          File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line
          File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__
        paramiko.SSHException: Invalid key
        Alexs-MacBook:fabric alex$
    I can't connect to the server via SSH. What could the problem be? Regards, Arshavski Alexander.

    Read the article

  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR-partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic). I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform. I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment though an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-) I'll finish up with some more details about the problem. When I connect the enclosure using USB to Windows 7, at first it appears everything worked. Windows detects the drive and installs a driver for it. Looking in Device Manager there is an entry under the Hard Drives section with the title Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP, so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.) Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that then the process in the previous paragraph repeats. In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and is attempting to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not.
excerpt from kern.log [ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22 [ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice [ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices [ 2684.395570] usb-storage: device found at 22 [ 2684.395572] usb-storage: waiting for device to settle before scanning [ 2689.390412] usb-storage: device scan complete [ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4 [ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0 [ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) [ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off [ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00 [ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412009] sde: sde1 sde2 [ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk [ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2707.740014] usb 1-2: device descriptor read/64, error -110 [ 2722.970103] usb 1-2: device descriptor read/64, error -110 [ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2738.320019] usb 1-2: device descriptor read/64, error -110 [ 2753.550024] usb 1-2: device descriptor read/64, error -110 [ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2758.810147] usb 1-2: device descriptor read/8, error -110 [ 2763.940142] usb 1-2: device descriptor read/8, error -110 [ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2769.200141] usb 1-2: device descriptor read/8, error -110 [ 2774.330137] usb 1-2: device descriptor read/8, error -110 [ 2774.440069] usb 1-2: USB disconnect, address 22 [ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery [ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23 [ 2789.710020] usb 1-2: device descriptor read/64, error -110 [ 2804.940020] usb 1-2: device descriptor read/64, error -110 [ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24 [ 2820.290019] usb 1-2: device descriptor read/64, error -110 [ 2835.520027] usb 1-2: device descriptor read/64, error -110 [ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25 [ 2840.780085] usb 1-2: device descriptor read/8, error -110 [ 2845.910079] usb 1-2: device descriptor read/8, error -110 [ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26 [ 2851.170112] usb 1-2: device descriptor read/8, error -110 [ 2856.300077] usb 1-2: device descriptor read/8, error -110 [ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11 [ 2871.850017] usb 3-2: device descriptor read/64, error -110 [ 2887.080014] usb 3-2: device descriptor read/64, error -110 [ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12 [ 2902.430021] usb 3-2: device descriptor read/64, error -110 [ 2917.660013] usb 3-2: device descriptor read/64, error -110 [ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13 [ 2922.911623] usb 3-2: device descriptor read/8, error -110 [ 2928.051753] usb 3-2: device descriptor read/8, error -110 [ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and 
address 14 [ 2933.301876] usb 3-2: device descriptor read/8, error -110 [ 2938.431993] usb 3-2: device descriptor read/8, error -110 [ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2 excerpt from lsusb -v Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0dc4 Macpower Peripherals, Ltd idProduct 0x0000 bcdDevice 0.01 iManufacturer 1 EZ QUEST iProduct 2 USB Mass Storage iSerial 3 220417 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 5 Config0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 4 Interface0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Update: Results using Firewire to connect. Today I recieved a 1394b 9 pin to 1394a 6 pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler which shows Firmware Revision: 0x107. Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern). OXFORD IDE Device 1: Manufacturer: EZ QUEST Model: 0x0 GUID: 0x1D202E0220417 Maximum Speed: Up to 800 Mb/sec Connection Speed: Up to 400 Mb/sec Sub-units: OXFORD IDE Device 1 Unit: Unit Software Version: 0x10483 Unit Spec ID: 0x609E Firmware Revision: 0x107 Product Revision Level: ST6O Sub-units: OXFORD IDE Device 1 SBP-LUN: Capacity: 1 TB (1,000,204,886,016 bytes) Removable Media: Yes BSD Name: disk3 Partition Map Type: MBR (Master Boot Record) S.M.A.R.T. status: Not Supported

    Read the article

  • Programmatically creating vector arrows in KML

    - by mettadore
    Does anyone have any practical examples of programmatically drawing icons as vectors in KML? Specifically, I have data with a magnitude and an azimuth at given coordinates, and I would like to have icons (or another graphical element) generated based on these values. Some thoughts on how I might approach it:
    Image directory (a brute force way): Make an image directory of 360 different image files (probably batch-rotate a single image), each pointing in a corresponding azimuth. I've seen things like "Excel to KML," but am looking for code that I can use within a program, rather than a web utility. Issue: the arrow does not carry any magnitude context, so that would have to be a label. I'd rather dynamically lengthen the arrow.
    Line creation in KML: Perhaps create a formula that creates a line with its origin at the coordinate points, with the length of the line proportional to the magnitude, and angled according to azimuth. There would then be two more lines, perhaps 30 degrees or so, extending from the end of the previous line to make the arrow head. Issues: not a separate image icon, so not sure how it would work in KML. Also not sure how easy it would be to generate this algorithm. (A sketch of this approach follows below.)
    Separate image generation: Perhaps create a PHP file that uses ImageMagick or something similar to dynamically generate a .png file in a similar way to the above, and then link to the icon using the URI "domain.tld/imagegen.php?magnitude=magvalue&azimuth=azmvalue". Issue: still have the problem of actually writing the algorithm for image generation.
    So, the question: has anyone else come up with solutions for programmatic vector (rather than merely arrow) generation?
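    On the second approach, the geometry itself is small: offset the arrow tip from the origin by a distance proportional to the magnitude along the azimuth, then emit a KML <LineString> between the two points (the arrow head would be two more short segments at roughly ±30° off the reverse bearing). The C# sketch below uses a flat-earth approximation and an arbitrary metersPerUnit scale factor; both the language and the scale are assumptions, not anything prescribed by KML.
        using System;
        using System.Globalization;

        static class KmlArrowSketch
        {
            const double EarthRadiusMeters = 6371000.0;

            // Offset (lat, lon) by 'meters' along 'azimuthDegrees', using a flat-earth
            // approximation that is fine for short arrows away from the poles.
            static void Offset(double lat, double lon, double azimuthDegrees, double meters,
                               out double tipLat, out double tipLon)
            {
                double az = azimuthDegrees * Math.PI / 180.0;
                double dLat = meters * Math.Cos(az) / EarthRadiusMeters;
                double dLon = meters * Math.Sin(az) / (EarthRadiusMeters * Math.Cos(lat * Math.PI / 180.0));
                tipLat = lat + dLat * 180.0 / Math.PI;
                tipLon = lon + dLon * 180.0 / Math.PI;
            }

            // Emit a Placemark whose line length is proportional to the magnitude.
            static string ArrowPlacemark(double lat, double lon, double azimuthDegrees,
                                         double magnitude, double metersPerUnit)
            {
                double tipLat, tipLon;
                Offset(lat, lon, azimuthDegrees, magnitude * metersPerUnit, out tipLat, out tipLon);
                return string.Format(CultureInfo.InvariantCulture,
                    "<Placemark><LineString><coordinates>{0},{1},0 {2},{3},0</coordinates></LineString></Placemark>",
                    lon, lat, tipLon, tipLat);   // KML coordinates are lon,lat,alt
            }

            static void Main()
            {
                Console.WriteLine(ArrowPlacemark(51.5, -0.12, 45.0, 2.5, 1000.0));
            }
        }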

    Read the article

  • ClickOnce Deployment Error - Access to the Path is Denied

    - by michael.lukatchik
    I have a WPF app that I'm deploying to a network path using ClickOnce deployment. After the app is deployed to a network location, I use the ClickOnce html page to launch the installation process. I am successfully able to download and install the app. However, my users are not able to download and install the app. When a user navigates to the ClickOnce html page and clicks to begin the installation process, the following error message is received:
        ERROR SUMMARY
        Below is a summary of the errors, details of these errors are listed later in the log.
        * Activation of http://software.mycompany.com/myapp/myapp.application resulted in exception. Following failure messages were detected:
          + Downloading file://dev/webs/software/myapp/myapp.application did not succeed.
        * [4/5/2010 1:56:59 PM] System.Deployment.Application.DeploymentDownloadException (Unknown subtype)
          - Downloading file://dev/Webs/software/myapp/myapp.application did not succeed.
    All signs point to this being a security issue. So, I've done the following:
    - Ensured that "Everyone" had read access to the files that were being deployed as part of my project
    - Ensured that "Everyone" had read access to the network location where the app was deployed (//dev/webs/software/myapp)
    - Ensured that "Everyone" had read access to the IIS path where the ClickOnce html page is located
    In each of these cases, I've made no progress in getting the app to successfully deploy via ClickOnce. Again, the odd thing is that I am able to successfully walk through the process of downloading and installing the app. It's my users, though, that need the ability to download and install the app. I've looked extensively on the web for answers, but there hasn't been much. I'd like to resolve the issue without "re-installing" or "rigging" anything. I need a solid answer. Thank you all for your input!! Mike

    Read the article

  • mvvm - prismv2 - INotifyPropertyChanged

    - by depictureboy
    Since this is so long and rambling and really doesn't ask a coherent question: 1: What is the proper way to implement subproperties of a primary object in a viewmodel? 2: Has anyone found a way to fix the DelegateCommand.RaiseCanExecuteChanged issue, or do I need to fix it myself until MS does? For the rest of the story... continue on. In my viewmodel I have a Doctor object property that is tied to my Model.Doctor, which is an EF POCO object. I have OnPropertyChanged("Doctor") in the setter, as such:
        Private Property Doctor() As Model.Doctor
            Get
                Return _objDoctor
            End Get
            Set(ByVal Value As Model.Doctor)
                _objDoctor = Value
                OnPropertyChanged("Doctor")
            End Set
        End Property
    The only time OnPropertyChanged fires is when the WHOLE object changes. This wouldn't be a problem except that I need to know when the properties of Doctor change, so that I can enable other controls on my form (a Save button, for example). I have tried to implement it in this way:
        Public Property FirstName() As String
            Get
                Return _objDoctor.FirstName
            End Get
            Set(ByVal Value As String)
                _objDoctor.FirstName = Value
                OnPropertyChanged("Doctor")
            End Set
        End Property
    This is taken from the XAMLPowerToys controls from Karl Shifflet, so I have to assume that it's correct. But for the life of me I can't get it to work. I have included PRISM in here because I am using a Unity container to instantiate my view, and it IS a singleton. I am getting change notification to the viewmodel via the EventAggregator, which then populates Doctor with the new value. The reason I am doing all this is because of PRISM's DelegateCommand. So maybe that is my real issue. It appears that there is a bug in DelegateCommand where it does not fire the RaiseCanExecuteChanged method on the commands that implement it, and it therefore needs to be fired manually. I have the code for that in my OnPropertyChanged event handler. Of course this isn't implemented through the ICommand interface either, so I have to break and make my properties DelegateCommand(Of X) so that I have access to the RaiseCanExecuteChanged of each command.
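    For what it's worth, the pattern usually shown for this is to raise PropertyChanged for the specific wrapper property (and, if the view also binds to the whole object, for "Doctor" as well) and to call RaiseCanExecuteChanged on the affected commands from the same setter. The sketch below is in C# rather than VB, and the Prism v2 namespace plus the Doctor/Save names are assumptions, so treat it as an illustration of the shape rather than a drop-in fix.
        using System.ComponentModel;
        using Microsoft.Practices.Composite.Presentation.Commands; // Prism v2 (CAL); adjust to your Prism version

        public class Doctor { public string FirstName { get; set; } } // hypothetical stand-in for the EF POCO

        public class DoctorViewModel : INotifyPropertyChanged
        {
            private readonly Doctor _doctor = new Doctor();

            public DoctorViewModel()
            {
                SaveCommand = new DelegateCommand<object>(x => { /* save the doctor */ }, x => IsDirty);
            }

            public DelegateCommand<object> SaveCommand { get; private set; }
            public bool IsDirty { get; private set; }
            public Doctor Doctor { get { return _doctor; } }

            public string FirstName
            {
                get { return _doctor.FirstName; }
                set
                {
                    if (_doctor.FirstName == value) return;
                    _doctor.FirstName = value;
                    IsDirty = true;
                    OnPropertyChanged("FirstName");       // property-level notification for bindings
                    OnPropertyChanged("Doctor");          // optional: whole-object notification
                    SaveCommand.RaiseCanExecuteChanged(); // re-evaluate CanExecute manually
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }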

    Read the article

  • Is there an extension or a way to write an extension for firefox that allows a developer to refresh

    - by Kevin J
    I'm a developer, and I spend much of my day refreshing the webapps I work on. Occasionally, I'll encounter pages where POST data was submitted, and Firefox will prompt me to Resend POST data or to Cancel. Now, I know that I can just redirect a page to itself to get rid of this warning, but I still want to keep this warning for our users; I just want to be able to skip it while developing. It's also not just on one page, but at many different points throughout the app, so it's not like I can just do if $debug==true then redirect or something like that. Basically just a minor convenience issue, but when I encounter the message 50-100 times a day, it can get aggravating. What I want to do is essentially have 3 options when refreshing: resend POST data, cancel, or refresh without resending POST data. The third option would be equivalent to clicking "enter" in the address bar (which is what I end up having to do). The problem with clicking enter is that I often have to "hard refresh" using ctrl+shift+r, but if I do this with POST data I have to click cancel, then click enter on the address bar, then do a hard refresh after that. I would instead like to press ctrl+shift+r, then continue hard refreshing the page without the POST data. Does anyone know how to do this? Through an extension or otherwise? It's totally a minor issue, but it's something that constantly bothers me and I actually think it would be quite a useful option. Thanks

    Read the article

  • Why does Tfs2010 build my Wix project before anything else?

    - by bwerks
    Hi all, A similar question was asked and answered about a year ago, but was either a different issue (everything was in beta) or misdiagnosed. It's located here: http://stackoverflow.com/questions/688162/msbuild-task-fails-because-any-cpu-solution-is-built-out-of-order. My issue is that I have a Wix installer project, and after upgrading to Tfs2010 on Monday, the build fails on linking because it can't find the build product of the WPF application in the project. After some digging, it's because it hasn't been built yet. Building in Vs2010 works as normal. The Wix project is set to depend on the WPF project, and when viewing Project Build Order in the IDE, everything looks as normal. The problem was originally encountered with only two platform definitions in the solution: x86 and x64. There are also two flavors, Debug and Release, and TFSBuild.proj is set to build all four combinations. There was no occurrence of AnyCPU anywhere. Per the referenced question above, I tried changing the WPF project to use AnyCPU so that it would be built first. At this point, the Wix project used the exact configuration and the WPF project used the flavor with AnyCPU. However, doing so didn't seem to change anything. I'm using the Tfs2010 RTM, Vs2010 RTM, and the most recent version of Wix, which at the time of this writing is 3.5.1602.0, from 2010-04-02. Anyone else running into this?

    Read the article

  • Is there a workaround for JDBC w/liquibase and MySQL session variables & client side SQL instructions

    - by David
    Slowly building a starter changeSet xml file for one of three of my employer's primary schemas. The only show stopper has been incorporating the sizable library of MySQL stored procedures to be managed by Liquibase. One sproc has been somewhat of a pain to deal with: the first few statements go like
        use TargetSchema;
        select "-- explanatory inline comment thats actually useful --" into vDummy;
        set @@session.sql_mode='TRADITIONAL' ;
        drop procedure if exists adm_delete_stats ;
        delimiter $$
        create procedure adm_delete_stats(
        ...rest of sproc
    I cut out the use statement as it's counter-productive, but the real issue is the set @@session.sql_mode statement, which causes an exception like
        liquibase.exception.MigrationFailedException: Migration failed for change set ./foobarSchema/sprocs/adm_delete_stats.xml::1293560556-151::dward_autogen dward:
             Reason: liquibase.exception.DatabaseException: Error executing SQL ...
    And then the delimiter statement is another stumbling block. Doing due diligence research, I found this rejected MySQL bug report here and this MySQL forum thread that goes a little bit more in depth into the problem here. Is there any way I can use the sproc scripts that currently exist with Liquibase, or would I have to re-write several hundred stored procedures? I've tried the createProcedure, sqlFile, and sql Liquibase tags without much luck, as I think the core issue is that set, delimiter, and similar SQL commands are meant to be interpreted and acted upon by the client-side interpreter before being delivered to the server.
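    As a hedged sketch only (attribute availability varies by Liquibase version, so verify against the release in use): the usual workaround is to strip the client-only lines (use, delimiter, and the set @@session directive) from the script and hand the remaining CREATE PROCEDURE body to the server as a single statement by disabling statement splitting, for example:
        <!-- Sketch: the referenced .sql file would contain only the CREATE PROCEDURE body,
             with the use/delimiter/set client directives removed. -->
        <changeSet id="adm_delete_stats-1" author="dward">
            <sql>drop procedure if exists adm_delete_stats</sql>
            <sqlFile path="foobarSchema/sprocs/adm_delete_stats.sql"
                     splitStatements="false"/>
        </changeSet>
    If the session sql_mode really is required, it could instead be set at the connection level (for example via the Connector/J sessionVariables URL property) rather than inside the script, though that too should be treated as an assumption to verify.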

    Read the article

  • Strange rare out-of-order data received using Indy

    - by Jim
    We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other are appearing at the other end intertwined oddly. This happens extremely infrequently. Each string is a complete XML message terminated with a LF and in general the READ process reads an entire XML message, returning when it sees the LF. The call to actually send the message is protected by a critical section around the call to the IOHandler's writeln method and so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly). This problem happens very rarely. The symptoms are odd...when we send string A followed by string B what we received at the other end (on the rare occasions where we have failure) is the trailing section of string A by itself (i.e., there's a LF at the end of it) followed by the leading section of string A and then the entire string B followed by a single LF. We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it. We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire". We're completely baffled and wondering (although always the lowest possibility) whether there could be some low-level Indy issues that might cause this issue, e.g., buffers being sent in the wrong order....very hard to believe this could be the issue but we're grasping at straws. Does anyone have any bright ideas?

    Read the article

  • Create a virtual serial port for widcomm stack under 32feet

    - by i13m
    Hi all, I am currently doing a project that involves a Bluetooth communication setup between a PDA and a small embedded device. This small embedded device can only be communicated with through a virtual serial port over a Bluetooth link. The PDA is an iPAQ running Windows Mobile 6, and I am using C#. I have written a program which can communicate with the serial port over Bluetooth. But the only issue is that every time I run this program, I have to activate the Bluetooth radio and manually pair the device with the PDA via the Bluetooth manager. What I want is that when running this program, it can establish the Bluetooth connection between the PDA and the embedded module itself. So I am using functions from the 32feet project. The one issue is that I can't make the virtual serial port part work, as I think the 32feet project can only make virtual serial ports for the Microsoft Bluetooth stack but not the Widcomm Bluetooth stack, which the iPAQ is using. Therefore, are there any existing C# classes or stacks that can make a virtual serial port under Widcomm for Windows Mobile 6? Thanks

    Read the article

  • AdMob ad in iPhone app makes App content disappear when "done" is pressed!

    - by nephilite
    Hello All, When I return from an AdMob ad by hitting "done", the content of my app has disappeared! All that remains is a background image I had attached directly to the main window. Oddly, I can still hear the result of my touch events from my main screen (which is now gone). This may be related to the issue some people have had regarding a 20-pixel move involving the toolbar... I see something to that effect as the ad starts to overlay. I have AdMob in another app that is working fine, and I notice when the ad opens in that app, the ad content fills the whole screen EXCEPT the top toolbar (it starts right under it). In the new app I'm working on right now this isn't the case. When the ad starts to open I see the toolbar vanish, then the ad comes in and fills the entire screen (including the area where the toolbar was); then when I click done and the ad goes away, everything under it is gone as well. It may be worth noting that the app I had working was 2.x and the current app is 3.x (and thus using the AdMob 3.0 libraries). This is very odd and deal-breaking; I need help ASAP. The relevant part of my view hierarchy is as follows: AppDelegate - ViewController - MainView (the ad is in here). There are also some other views that are children of the ViewController, and a UITabBar is also a subview of the ViewController (programmatically declared, not a UITabBarController). Any help you can offer would be extremely appreciated... I need to resolve this issue ASAP, release in two days! Thanks!!

    Read the article

  • complex sql which runs extremely slow when the query has order by clause

    - by basit.
    I have the following complex query which I need to use. When I run it, it takes 30 to 40 seconds. But if I remove the order by clause, it takes 0.0317 seconds to return the result, which is really fast compared to 30 or 40 seconds.
        select DISTINCT media.*, username
        from album as album, album_permission as permission, user as user, media as media
        where ((media.album_id = album.album_id and album.private = 'yes'
                and album.album_id = permission.album_id
                and (permission.email = '' or permission.user_id = ''))
            or (media.album_id = album.album_id and album.private = 'no')
            or media.album_id = '0')
          and media.user_id = user.user_id
          and media.media_type = 'video'
        order by media.id DESC
        LIMIT 0,20
    The id in the order by is the primary key, which is indexed too. So I don't know what the problem is. I also have the album and album_permission tables just to check whether media is public or private, and if private, whether the user has permission or not. I was thinking maybe that is causing the issue. What if I did this in a sub-query, would that work better? Also, can someone help me write that sub-query, if that is the solution? If you can't help write it, at least tell me. I'm really going crazy with this issue.
    SOLUTION, MAYBE: Yes, I think a sub-query would be the best solution for this, because the following query runs in 0.0022 seconds. But I'm not sure whether the album validation is still accurate; please check.
        select media.*, username
        from media as media, user as user
        where media.user_id = user.user_id
          and media.media_type = 'video'
          and media.id in (select media2.id
                           from media as media2, album as album, album_permission as permission
                           where ((media2.album_id = album.album_id and album.private = 'yes'
                                   and album.album_id = permission.album_id
                                   and (permission.email = '' or permission.user_id = ''))
                               or (media.album_id = album.album_id and album.private = 'no')
                               or media.album_id = '0')
                             and media.album_id = media2.album_id)
        order by media.id DESC
        LIMIT 0,20

    Read the article

  • WCF Message Debugging on WebHttpBehavior

    - by Programming Hero
    I've created a custom binding in WCF for a custom MessageEncoder to allow messages to be written as XML using a wider range of encodings than WCF supports out of the box. The encoder appears to be working and I am able to send and receive messages, but I want to verify that the XML message being written is exactly as required by the service I am trying to consume. I've turned on message logging for WCF using the diagnostic trace listeners to output the messages sent and received over the wire to a log file. Unfortunately, for calls using my encoder, the message is displayed as ... stream ... EDIT: I don't think it's anything to do with my custom encoding. I have experimented with my custom binding a little, switching to using the built-in text encoding and http transport. I still don't get a message body logged in the message trace. EDIT 2: Having done further investigation, the issue looks to be related not to the custom binding, but to the custom behaviour. I'm applying the <webHttp/> behaviour. Once this is specified (along with manual addressing), the tracing behaviour shows up. Is this a known issue with WebHttpBehavior? Am I still barking up the wrong tree?
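    For reference, a hedged sketch of the standard WCF message-logging configuration is below (the listener path and limits are arbitrary). Logging the entire message at the transport level is the usual first step for seeing the bytes the encoder actually wrote, though a "... stream ..." placeholder can still appear when the body is streamed rather than buffered.
        <!-- Sketch of standard WCF message logging; adjust the listener path and limits as needed. -->
        <system.serviceModel>
          <diagnostics>
            <messageLogging logEntireMessage="true"
                            logMessagesAtServiceLevel="true"
                            logMessagesAtTransportLevel="true"
                            logMalformedMessages="true"
                            maxMessagesToLog="1000" />
          </diagnostics>
        </system.serviceModel>
        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel.MessageLogging">
              <listeners>
                <add name="messages"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="c:\logs\messages.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>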

    Read the article

  • Authlogic, logout, credential capture and security

    - by Paddy
    Ok, this is something weird. I got authlogic-oid installed in my Rails app today. Everything works perfectly fine but for one small nuisance. This is what I did: I first register with my Google OpenID. Successful login, redirection, and my email, along with my correct OpenID, is stored in my database. I am happy that everything worked fine! Now when I log out, my Rails app as usual destroys the session and redirects me back to my root URL, where I can log in again. Now if I try to log in, it still remembers my last login ID. Not a big issue, as I can always "Sign in as a different user", but I am wondering if there is any way to not only log out from my app but also log out from Google. I noticed the same with Stack Overflow's OpenID authentication system. Why am I so bothered about this, you may ask. But is it not a bad idea if your web app's end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his Google account, only to realize later that his Google account got hacked by some unworthy loser who just happened to notice that the person before him had not logged out from Google and, say, changed his password?! Should I be paranoid? Isn't this a major security lapse in implementing the OpenID spec? Probably someone can give me a workaround for this issue today and the question is solved for me. But what about the others who have implemented OpenID in their apps and not implemented a workaround?

    Read the article

  • Javascript "Member not found" error in IE8

    - by Steven
    I'm trying to debug the following block of Javascript code to see what the issue is. I'm getting an error that says "Member not found" on the line constructor = function() { in the extend: function() method. I'm not very good with Javascript, and I didn't write this, so I'm kind of lost on what the issue is. The error only occurs in IE8; it works fine in IE7 and Firefox.
        var Class = {
            create: function() {
                return function() {
                    if(this.destroy) Class.registerForDestruction(this);
                    if(this.initialize) this.initialize.apply(this, arguments);
                }
            },
            extend: function(baseClassName) {
                constructor = function() {
                    var i;
                    this[baseClassName] = {}
                    for(i in window[baseClassName].prototype) {
                        if(!this[i]) this[i] = window[baseClassName].prototype[i];
                        if(typeof window[baseClassName].prototype[i] == 'function') {
                            this[baseClassName][i] = window[baseClassName].prototype[i].bind(this);
                        }
                    }
                    if(window[baseClassName].getInheritedStuff) {
                        window[baseClassName].getInheritedStuff.apply(this);
                    }
                    if(this.destroy) Class.registerForDestruction(this);
                    if(this.initialize) this.initialize.apply(this, arguments);
                }
                constructor.getInheritedStuff = function() {
                    this[baseClassName] = {}
                    for(i in window[baseClassName].prototype) {
                        if(!this[i]) this[i] = window[baseClassName].prototype[i];
                        if(typeof window[baseClassName].prototype[i] == 'function') {
                            this[baseClassName][i] = window[baseClassName].prototype[i].bind(this);
                        }
                    }
                    if(window[baseClassName].getInheritedStuff) {
                        window[baseClassName].getInheritedStuff.apply(this);
                    }
                }
                return constructor;
            },
            objectsToDestroy : [],
            registerForDestruction: function(obj) {
                if(!Class.addedDestructionLoader) {
                    Event.observe(window, 'unload', Class.destroyAllObjects);
                    Class.addedDestructionLoader = true;
                }
                Class.objectsToDestroy.push(obj);
            },
            destroyAllObjects: function() {
                var i, item;
                for(i=0; item=Class.objectsToDestroy[i]; i++) {
                    if(item.destroy) item.destroy();
                }
                Class.objectsToDestroy = null;
            }
        }

    Read the article

  • How do I escape a LIKE clause using NHibernate Criteria?

    - by Jon Seigel
    The code we're using is straight-forward in this part of the search query: myCriteria.Add( Expression.InsensitiveLike("Code", itemCode, MatchMode.Anywhere)); and this works fine in a production environment. The issue is that one of our clients has item codes that contain % symbols which this query needs to match. The resulting SQL output from this code is similar to: SELECT ... FROM ItemCodes WHERE ... AND Code LIKE '%ItemWith%Symbol%' which clearly explains why they're getting some odd results in item searches. Is there a way to enable escaping using the programmatic Criteria methods? Addendum: We're using a slightly old version of NHibernate, 2.1.0.4000 (current as of writing is 2.1.2.4853), but I checked the release notes, and there was no mention of a fix for this. I didn't find any open issue in their bugtracker either. We're using SQL Server, so I can escape the special characters (%, _, [, and ^) in code really easily, but the point of us using NHibernate was to make our application database-engine-independent as much as possible. Neither Restrictions.InsensitiveLike() nor HqlQueryUtil.GetLikeExpr() escape their inputs, and removing the MatchMode parameter makes no difference as far as escaping goes.
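    One common approach, since the backing database here is SQL Server, is to escape the LIKE wildcard characters in the search term before handing it to the criteria API; SQL Server treats a character wrapped in square brackets as a literal. A hedged C# sketch follows (the helper name is hypothetical, and note that this deliberately ties the code path to SQL Server semantics, which cuts against full database independence):
        // Sketch: bracket-escape SQL Server's LIKE wildcards so they match literally.
        // The "[" replacement must run first so later brackets are not re-escaped.
        static string EscapeSqlServerLikeValue(string value)
        {
            return value
                .Replace("[", "[[]")
                .Replace("%", "[%]")
                .Replace("_", "[_]");
        }

        // Usage with the criteria from the question:
        myCriteria.Add(
            Expression.InsensitiveLike("Code", EscapeSqlServerLikeValue(itemCode), MatchMode.Anywhere));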

    Read the article

  • Converting Milliseconds to Timecode

    - by Jeff
    I have an audio project I'm working on using BASS from Un4seen. This library mainly uses BYTES, but I have a conversion in place that lets me show the current position of the song in milliseconds, knowing that MS = Samples * 1000 / SampleRate and that Samples = Bytes * 8 / Bits / Channels. So here's my main issue, and it's fairly simple... I have a function in my project that converts the milliseconds to timecode in Mins:Secs:Milliseconds.
        Public Function ConvertMStoTimeCode(ByVal lngCurrentMSTimeValue As Long)
            ConvertMStoTimeCode = CheckForLeadingZero(Fix(lngCurrentMSTimeValue / 1000 / 60)) & ":" & _
                                  CheckForLeadingZero(Int((lngCurrentMSTimeValue / 1000) Mod 60)) & ":" & _
                                  CheckForLeadingZero(Int((lngCurrentMSTimeValue / 10) Mod 100))
        End Function
    Now the issue comes within the seconds calculation. Any time the MS calculation is over .5, the seconds place rounds up to the next second. So 1.5 seconds actually prints as 2.5 seconds. I know for sure that using the Int conversion causes a round down, and I know my math is correct, as I've checked it in a calculator 100 times. I can't figure out why the number is rounding up. Any suggestions?
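    For comparison, here is a minimal sketch of the same conversion done entirely in integer arithmetic (C# is an assumption; the original is VB). Keeping every step in integer division means no intermediate fractional value can be rounded, which matters because, if this is classic VB/VBA, the Mod operator rounds a floating-point operand to an integer first, so (1500 / 1000) Mod 60 is evaluated as 2 Mod 60 rather than 1.
        using System;

        static class TimecodeSketch
        {
            // Convert milliseconds to "MM:SS:HH" (minutes, seconds, hundredths)
            // using integer division only, so nothing is ever rounded up.
            static string MsToTimecode(long milliseconds)
            {
                long minutes = milliseconds / 60000;
                long seconds = (milliseconds / 1000) % 60;
                long hundredths = (milliseconds / 10) % 100;
                return string.Format("{0:00}:{1:00}:{2:00}", minutes, seconds, hundredths);
            }

            static void Main()
            {
                Console.WriteLine(MsToTimecode(1500));   // 00:01:50
                Console.WriteLine(MsToTimecode(61234));  // 01:01:23
            }
        }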

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that leads me to this piece of Sun documentation which explains: The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line. Which tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond? I'm trying to determine the best approach to actually solving this issue rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly-minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in the execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • IE7 - visited links revert to unvisted after page refresh

    - by Gerald
    Hello, A number of our users have just upgraded from IE6 to IE7. The upgraded users are reporting an issue with visited links reverting to their unvisited color after a page refresh. This only happens to links that use javascript instead of a hard-coded URL:
        <script lang="JavaScript">
        <!--
        function LoadGoogle()
        {
            var LoadGoogle = window.open('http://www.google.com');
        }
        -->
        </script>
        <a href="javascript:LoadGoogle()">Google using javascript</a>
        <a href="#" OnClick="javascript:LoadGoogle()">Google using javascript OnClick</a>
    The above links will revert back to the unvisited color whenever the page is refreshed. It doesn't matter if the page is refreshed because of a postback, manually hitting the refresh or F5 button, or from an auto-refresh function. Please note, the above code is an oversimplification of what is actually happening, but I believe it illustrates the issue well enough. This is causing a problem for our users because we are providing them with a list of items that are all opened into new windows via javascript when they are clicked, and we refresh the parent page when the users are finished with them. Each time the parent page is refreshed, all of these links revert back to their unvisited color, so our users are losing track of which items they've worked on. I've been digging around and it looks like this is intended behavior: IE7 doesn't register these links with the browser's history. Does anyone know a workaround that will allow us to keep these javascript links in the visited state without having to do a major overhaul of the app's code? Thank you.

    Read the article

  • Losing Session between Classic ASP and ASP.NET

    - by Aaron
    The company that I work for is making a transition between classic ASP programs and ASP.NET programs for our intranet software. Eventually, everything will be written in ASP.NET, but because of time constraints, there are still a number of programs that use classic ASP. To compensate we've written features that allow for redirects and autologins between classic ASP programs and ASP.NET programs. I've been starting to see a problem, though, with holding the session state for our ASP.NET software. If a user uses an ASP.NET program, then does work in a classic ASP program, then goes back to that ASP.NET program, often times, the user's authentication for the ASP.NET program is still in place, yet the user's session is lost, resulting in an error whenever a function is performed within the program. I'm trying to capture the loss of the session state in global.asax's Session_End event, which would redirect the user to the login page, but that hasn't worked. Has anyone else faced a similar issue with users moving back and forth between classic ASP and ASP.NET and losing sessions? Is that even my real issue here? It's the only thing that I can see as being a problem. Thanks for any help.

    Read the article

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem using the Verity search service bundled with ColdFusion 8. The issue is with timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance: if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64-bit as far as I know. The error I receive is: "An error occurred while performing an operation in the Search Engine library. Error reading collection information.: com.verity.api.administration.ConfigurationException: java.io.IOException: Read timed out" Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should cause it to become unusable. The collections aren't big; there are maybe around 5,000 records between all of them. Any ideas?

    Read the article

  • Hiding a UINavigationController's UIToolbar during viewWillDisappear:

    - by Nathan de Vries
    I've got an iPhone application with a UITableView menu. When a row in the table is selected, the appropriate view controller is pushed onto the application's UINavigationController stack. My issue is that the MenuViewController does not need a toolbar, but the UIViewControllers which are pushed onto the stack do. Each UIViewController that gets pushed calls setToolbarHidden:animated: in viewDidAppear:. To hide the toolbar, I call setToolbarHidden:animated: in viewWillDisappear:. Showing the toolbar works, such that when the pushed view appears the toolbar slides up and the view resizes correctly. However, when the back button is pressed the toolbar slides down but the view does not resize. This means that there's a black strip along the bottom of the view as the other view transitions in. I've tried adding the toolbar's height to the height of the view prior to hiding the toolbar, but this causes the view to be animated during the transition so that there's still a black bar. I realise I can manage my own UIToolbar, but I'd like to use UINavigationController's built-in UIToolbar for convenience. This forum post mentions the same issue, but no workaround is given.

    Read the article
