Search Results

Search found 428 results on 18 pages for 'disappeared'.


  • WindowsFormsApplicationBase SplashScreen makes login form ignore keypresses until I click on it - how to debug?

    - by Tom Bushell
    My WinForms app has a simple modal login form, invoked at startup via ShowDialog(). When I run from inside Visual Studio, everything works fine: I can just type in my User ID, hit the Enter key, and get logged in. But when I run a release build directly, everything looks normal (the login form is active, there's a blinking cursor in the User ID MaskedEditBox), but all keypresses are ignored until I click somewhere on the login form. Very annoying if you are used to doing everything from the keyboard. I've tried to trace through the event handlers, and to set the focus directly with code, to no avail. Any suggestions on how to debug this (outside of Visual Studio), or failing that, a possible workaround?

    Edit: Here's the calling code, in my main form:

        private void OfeMainForm_Shown(object sender, EventArgs e)
        {
            OperatorLogon();
        }

        private void OperatorLogon()
        {
            // Modal dialogs should be in a "using" block for proper disposal
            using (var logonForm = new C21CfrLogOnForm())
            {
                var dr = logonForm.ShowDialog(this);
                if (dr == DialogResult.OK)
                    SaveOperatorId(logonForm.OperatorId);
                else
                    Application.Exit();
            }
        }

    Edit 2: Didn't think this was relevant, but I'm using Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase for its splash screen and SingleInstanceController support. I just commented out the splash screen code, and the problem has disappeared. So that's opened up a whole new line of inquiry...

    Edit 3: Changed title to reflect better understanding of the problem.

    Read the article

  • Joomla ImageBrowser Lightbox not working

    - by jmorhardt
    I've been using an image browser for Joomla called ImageBrowser. It's great - easy uploads, easy for clients to use, and it even handles zip file uploads. I have Joomla v. 1.5.15 installed and the newest version of this plugin as well on 3 or 4 of our sites with no fuss or issues. Recently, the ImageBrowser on one of our sites started acting strangely: it was as if the Lightbox effect we should have had disappeared completely. I compared the settings for the site to another and found they were identical. I couldn't find a solution in the documentation or forums. Here's the URL to look at: http://neda.us/photos?view=gallery&folder=BSU+12-5-09. You can compare that to another of our sites with the same plugin and settings at South Oak Floors dot COM. You should be able to click on a thumbnail and get a full size view of the image in a Lightbox. Any help is much needed and much appreciated.

    Read the article

  • Can I create an activity for a particular task without that task coming to the foreground?

    - by Neil Traft
    Here's my use case: The app starts at a login screen. You enter your credentials and hit the "Login" button. Then a progress dialog appears and you wait for some stuff to download. Once the stuff has downloaded, you are taken to a new activity. Exactly which activity you are taken to depends on the server response.

    Here's my problem: If you go HOME during this login/download process, at some point in the near future your download will complete and will invoke startActivity(). So then the new activity will be pushed to the foreground, rudely interrupting the user. I can't start the activity before I start the download, because, as I mentioned earlier, the activity I start depends on the result of the download. I would obviously not like to interrupt the user like this.

    One way to solve this is to refrain from calling startActivity() until the user returns to the app. I can do this by keeping track of the LoginActivity's onStop() and onRestart(). But I'm wondering, is there any way to create the activity while it is in the background? That way the user returns to the app and he is ready to go... otherwise he would have to wait for the new activity to be created (which could take some time, because the new activity also has to download and display some data).

    Update: Guess what? I LIED! I could have sworn that starting this activity was causing it to come to the foreground, but I went back to test it again and the problem has magically disappeared. I tested in both 1.6 and 2.0.1 and both OSes were smart enough not to bring a backgrounded task to the front.

    Read the article

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records); I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I found quickly, however, that a union query will not show up when connecting to the data source from Excel 2007. So, I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works and it, initially, was accessible from Excel. Time passed, I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access) and my other non-union queries are showing up in Excel 2007... but not the one that references the union.

    What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I am stumped and need some guidance. Thanks.

    Read the article

  • The SVG text node disappears after changing its text content

    - by sureone
    svg:

        <text xml:space="preserve" y="228" x="349.98" text-anchor="middle" stroke-width="0"
              stroke-linejoin="null" stroke-linecap="null" stroke-dasharray="null" stroke="#000000"
              fill="#000000" style="cursor: move; pointer-events: inherit;" font-size="24"
              font-family="serif" id="cur_b">cur_b</text>
        <text xml:space="preserve" y="222" x="103.98" text-anchor="middle" stroke-width="0"
              stroke-linejoin="null" stroke-linecap="null" stroke-dasharray="null" stroke="#000000"
              fill="#000000" style="cursor: move; pointer-events: inherit;" font-size="24"
              font-family="serif" id="cur_a">cur_a</text>
        <text xml:space="preserve" y="229" x="590.0211" text-anchor="middle" stroke-width="0"
              stroke-linejoin="null" stroke-linecap="null" stroke-dasharray="null" stroke="#000000"
              fill="#000000" style="cursor: move; pointer-events: inherit;" font-size="24"
              font-family="serif" id="cur_c">cur_c</text>

    Objective-C:

        NSString *theJS = @"var theNode0 = document.getElementById('cur_a');"
                           "theNode0.textContent='200A';"
                           "theNode0.setAttribute('fill','#FF0000');"
                           "var theNode1 = document.getElementById('cur_c');"
                           "theNode1.textContent='200A';"
                           "theNode1.setAttribute('fill','#00FF00');";
        [self.webView stringByEvaluatingJavaScriptFromString:theJS];

    The SVG text node value is changed, but it disappears after about one second.

    Read the article

  • Hylafax / Capi4hylafax: faxgetty does not recognize number of lines

    - by Wrikken
    We've got a T.30 card with 30 working lines on it, but for some reason, if I add more than 30 faxes to the queue at any time (and we're busy enough at peak times that this happens a lot), faxgetty sends faxes to non-existent lines and they appear in the error queue as a 'busy' signal on the line, which results in a lot of failed faxes because the counter of max 3 tries increases rapidly. This is using faxgetty (USE_FAXGETTY="y" in /etc/default/hylafax). I've inherited this thing, so I'm not entirely sure how faxgetty is supposed to know the number of lines. However, if I alter the script to faxmodem (USE_FAXGETTY="n" in /etc/default/hylafax and manually enabling 30 modems), this behavior goes away: new faxes 'wait' for a line to be available before trying to send, so each try/fail is a valid one on a working line, greatly decreasing the amount of failed faxes. However, when researching this, almost everyone talks about faxgetty being the preferred, more robust method, and on top of that, for some unexplained reason all FIFOs disappeared after several errorless hours with faxmodem, forcing a hylafax restart using faxgetty until we figured out why the faxmodem solution failed (which is another question, and somewhat out of scope here).

    Environment:

        Debian 2.6.26-2-amd64
        capi4hylafax 1:01.03.00.99.svn.300-12
        hylafax-client 2:4.4.4-10.1
        hylafax-server 2:4.4.4-10.1

    Config:

        --hfaxd.conf--
        LogFacility: daemon
        ServerTracing: 0x1ff

        --hyla.conf--
        Host: localhost
        Verbose: No
        VRes: 196
        TimeZone: local
        DialRules: "/etc/hylafax/dialrules.europe"

        --/etc/hylafax/config--
        InternationalPrefix: 00
        LongDistancePrefix: 0
        AreaCode: 99999
        CountryCode: 31
        DialStringRules: "etc/dialrules.europe"
        ModemGroup: any:faxCAPI
        SendFaxCmd: "/usr/bin/wrapc2faxsend"

        --/etc/hylafax/config.faxCAPI--
        SpoolDir: /var/spool/hylafax
        FaxRcvdCmd: /var/spool/hylafax/bin/faxrcvd
        PollRcvdCmd: /var/spool/hylafax/bin/pollrcvd
        FaxReceiveUser: uucp
        FaxReceiveGroup: dialout
        LogFile: /var/spool/hylafax/log/capi4hylafax   # no, checking this log did not yield anything interesting
        LogTraceLevel: 4
        LogFileMode: 0600
        ModemGroup: any:faxCAPI
        # repeats for faxCAPI2 - faxCAPI30, with of course another device name / local ident:
        {
            HylafaxDeviceName: faxCAPI
            RecvFileMode: 0600
            FAXNumber: ****redacted****
            LocalIdentifier: ****some-ident-per-device****
            MaxConcurrentRecvs: 0
            OutgoingController: 1
            OutgoingMSN:
            SuppressMSN: 0
            NumberPrefix:
            NumberPlusReplacer: "00"
            UseISDNFaxService: 0
            RingingDuration: 0
            {
                Controller: 1
                AcceptSpeech: 0
                UseDDI: 0
                DDIOffset:
                DDILength: 0
                IncomingDDIs:
                IncomingMSNs:
                AcceptGlobalCall: 1
            }
        }

    So in short: how does faxgetty determine the number of lines available? (The man page isn't terribly revealing, and I can't find an appropriate setting in hylafax-config.) And how can I get a capi4hylafax/hylafax setup which correctly queues more faxes than there are lines available, without immediately incrementing the fail count? We will not be receiving any faxes on this machine, b.t.w. As I said, I've inherited this thing, so if there are important configuration options I'm not including, please let me know.

    Read the article

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from the EPEL repository together with StrongSwan 5.0.4. I set up a simple IPsec connection:

        conn l2tp
            type=transport
            keyexchange=ikev1
            rekey=no
            authby=psk
            leftsubnet=0.0.0.0/0
            rightsubnet=0.0.0.0/0
            compress=yes
            auto=add

    And here is xl2tpd.conf:

        [global]
        ipsec saref = yes

        [lns default]
        ip range = 192.168.0.2-192.168.0.250
        local ip = 192.168.0.1
        ppp debug = yes
        pppoptfile = /etc/ppp/options.xl2tpd
        length bit = yes

    Here is options.xl2tpd:

        ms-dns 8.8.4.4
        auth
        lock
        debug
        proxyarp

    There is only one client - Android 4.2. Android connects successfully:

        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default'
        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated

    In the meanwhile, the Internet works perfectly on the Android client; the VPN connection is stable and fast. However, it always happens that, within 2-5 minutes after the connection is established:

        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing.
        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout)
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted

    Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, MacOS 10.8, and Windows 7; there is no disconnection issue on those OSes. Thank you!

    Read the article

  • Linux Fiber Channel Host Setup Basic

    - by Jim
    I've been googling for about 4 hours now with no luck. I am trying to set up a Linux server running Oracle Server 6.3 as a Fibre Channel host, and then connect it to a Dell Compellent Fibre Channel host containing a 500GB volume. The Oracle server itself contains two Brocade 815 FC HBAs. I've discovered their WWNs (I think) via:

        cat /sys/class/fc_host/host1/port_name
        0x100000051efc3d85
        cat /sys/class/fc_host/host2/port_name
        0x100000051efc3d9f

    The next part is where I am at a loss. I've used iSCSI before... is FC the same deal, where you have an initiator and a target? If so, where do I specify that in Linux? I'm also new to Fibre Channel as a protocol, so I am unsure what is needed to make a transaction: WWN and port ID? Similar to an IP:port combination in the Ethernet world. I've read a lot regarding using the systool, multipath, and fc_transport commands; however, none of these is recognized as a valid command on Oracle Server 6.3. Appreciate the guidance and assistance.

    I installed scsi-target-utils and can now run rescan-scsi-bus and sg_map -x.

        rescan-scsi-bus.sh -l -w -r
        Host adapter 0 (megaraid_sas) found.
        Host adapter 1 ((null)) found.
        Host adapter 2 ((null)) found.
        Host adapter 3 (ata_piix) found.
        Host adapter 4 (ata_piix) found.
        Scanning SCSI subsystem for new devices and remove devices that have disappeared
        Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 0 2 0 0 ....
        OLD: Host: scsi0 Channel: 02 Id: 00 Lun: 00
             Vendor: DELL     Model: PERC H700        Rev: 2.30
             Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning for device 0 2 1 0 ...
        OLD: Host: scsi0 Channel: 02 Id: 01 Lun: 00
             Vendor: DELL     Model: PERC H700        Rev: 2.30
             Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning host 1 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 1 0 3 1 ...
        OLD: Host: scsi1 Channel: 00 Id: 03 Lun: 01
             Vendor: COMPELNT Model: Compellent Vol   Rev: 0505
             Type:   Direct-Access                    ANSI SCSI revision: 05
        Scanning host 2 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning host 3 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 3 0 0 0 ...
        REM: Host: scsi3 Channel: 00 Id: 00 Lun: 00
        DEL: Vendor: TEAC     Model: DVD-ROM DV-28SW  Rev: R.2A
             Type:   CD-ROM                           ANSI SCSI revision: 05
        Scanning host 4 channels 0 for SCSI target IDs 0, LUNs 0 1 2 3 4 5 6 7
        0 new device(s) found.
        1 device(s) removed.

    And sg_map -x:

        /dev/sg0  0 0 32 0  13
        /dev/sg1  0 2 0 0  0  /dev/sda
        /dev/sg2  0 2 1 0  0  /dev/sdb
        /dev/sg4  1 0 3 1  0  /dev/sdc

    I'm not sure what this all means...

    Read the article

  • Restore a single user's Exchange 2003 mailbox from backup

    - by Campo
    I take weekly backups of Exchange in full. I also take complete weekly backups of the entire server. It is a Server 2003 R2 with AD and Exchange 2003 all on one box. One user's inbox has disappeared. She has 19,000+ junk items now. It is possible the inbox got mixed into the junk. Regardless, it is such a huge mess she is not going to go through all of that... I want to restore her mailbox from the backup. I followed this MS KB: http://support.microsoft.com/kb/823176. I had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NTBackup. The backup log just states to check the application log... The application log points to the backup log... The only info is "failed to restore". The only thing different is the computer name... The only error I can find is in the application log: "Information Store Database not found". All others just say that the backup failed. Any assistance is greatly appreciated.

    UPDATE: I have successfully proven that I can restore the DB into a recovery storage group in my VM. Unfortunately, because the actual account is on a different store, I am unable to do the recovery... The error is:

        The attempt to log on to the Microsoft Exchange Server computer has failed.
        The MAPI provider failed.
        Microsoft Exchange Server Information Store
        ID no: 8004011d-0512-00000000

    Two questions:

    QUESTION 1: Should I repeat my steps on the production Exchange server in the recovery storage group, then merge into her original account? I am just concerned about doing a recovery like that on the live server...

    QUESTION 2: Is there any way I can extract her .PST from my recovery VM and then import it into her Outlook? On the recovery VM, I restored the raw DB from my full backup, repaired it with ESEUTIL, and then mounted it in the recovery store. I was thinking I could just repeat that and mount it in the main store on the VM?

    Thanks for the suggestions.

    Read the article

  • kvm and qemu host: Is there a limit for max CPUs (Ubuntu 10.04)?

    - by Valentin
    Today we encountered some really strange behaviour on two identical KVM and QEMU hosts. The host systems each have 4 x 10 cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04 64-bit). We started a Windows 2003 32-bit VM (1 CPU, 1 GB RAM; we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes, a black screen is shown and nothing happens. libvirt and the host system show that the qemu-kvm process for the guest is almost idling. stracing this process only shows some FUTEX entries, but nothing special. After those 15 minutes, the Windows VM suddenly starts booting and the Windows logo appears. After a few seconds, the VM is ready to be used. The VM itself is very performant, so this is no performance issue.

    We tried to pin the CPUs with the virsh and taskset tools, but this only made things worse. When we boot the Windows VM with a Linux live CD, there is also a black screen for several minutes, but not as long as 15. When booting another VM on this host (Ubuntu 10.04), it also has the black screen problem, but here the black screen is only shown for 2-3 minutes (instead of 15).

    So, summarizing this: each guest on each of those identical nodes idles for a few minutes after being started. After a few minutes, the boot process suddenly starts. We have observed that the idling time happens right after the BIOS of the guest is initialized. One of our employees had the idea to limit the number of CPUs with maxcpus=40 (because 40 physical cores exist) within GRUB (as a kernel parameter), and suddenly the "black-screen-idling" behaviour disappeared.

    Searching the KVM and QEMU mailing lists, the internet, forums, Server Fault and various other sites for known bugs etc. showed no useful results. Even asking in the dev IRC channels brought no new ideas. The people there recommended CPU pinning, but as stated before, it didn't help.

    My question is now: is there a sort of limit on CPUs for a QEMU or KVM host system? Browsing the source code of those two tools showed that KVM would send a warning if your host has more than 255 CPUs, but we are not even scratching that limit.

    Some stuff about the host system:

        3.0.0-20-server
        kvm 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4
        kvm-pxe 5.4.4-7ubuntu2
        qemu-kvm 0.14.0+noroms-0ubuntu4
        qemu-common 0.14.0+noroms-0ubuntu4
        libvirt 0.8.8-1ubuntu6
        4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 cores

    Read the article

  • DELL DRAC & Ubuntu VPN Connection

    - by Mikunos
    I am trying, without success, to connect to a DELL DRAC card via the Ubuntu VPN connection manager. I have this data:

        Protocol: PPTP
        PPTP server IP: 1233.123.123.123
        DRAC IP: 192.168.10.25
        Subnet: 255.255.0.0
        User: myuser
        Pass: mypass

    Where do I have to enter these parameters? I have configured the PPTP connection using the graphical tool in Ubuntu 11.10... but in /var/log/syslog I get these messages:

        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> Starting VPN service 'pptp'...
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 18180
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' appeared; activating connections
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN plugin state changed: 3
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN connection 'Connessione VPN 1' (Connect) reply received.
        Apr 15 11:33:15 shinet pppd[18182]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded.
        Apr 15 11:33:15 shinet pppd[18182]: pppd 2.4.5 started by root, uid 0
        Apr 15 11:33:15 shinet pppd[18182]: Using interface ppp0
        Apr 15 11:33:15 shinet pppd[18182]: Connect: ppp0 <--> /dev/pts/1
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        Apr 15 11:33:15 shinet pptp[18185]: nm-pptp-service-18180 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Apr 15 11:33:46 shinet pppd[18182]: LCP: timeout sending Config-Requests
        Apr 15 11:33:46 shinet pppd[18182]: Connection terminated.
        Apr 15 11:33:46 shinet avahi-daemon[1081]: Withdrawing workstation service for ppp0.
        Apr 15 11:33:46 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:46 shinet pppd[18182]: Modem hangup
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet pppd[18182]: Exit.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state changed: 6
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state change reason: 0
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS.
        Apr 15 11:33:57 shinet NetworkManager[1035]: <info> VPN service 'pptp' disappeared

    Thanks

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Wide Web Publishing Service; the website was still down. I tried restarting a few other services like DNS and ColdFusion, and the website was still down. In the end I restarted the server and the website reappeared.

    Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these:

        Event Type: Information
        Event Source: TermService
        Event Category: None
        Event ID: 1012
        Date: 30/01/2012
        Time: 15:25:12
        User: N/A
        Computer: SERVER51338
        Description: Remote session from client name a exceeded the maximum allowed failed logon attempts. The session was forcibly terminated.

    at a frequency of around 3-5 a minute. At about the time my website died, there was one of these:

        Event Type: Information
        Event Source: W3SVC
        Event Category: None
        Event ID: 1074
        Date: 30/01/2012
        Time: 19:36:14
        User: N/A
        Computer: SERVER51338
        Description: A worker process with process id of '6308' serving application pool 'DefaultAppPool' has requested a recycle because the worker process reached its allowed processing time limit.

    which is obviously what killed the web service. There were then a few of these:

        Event Type: Error
        Event Source: TermDD
        Event Category: None
        Event ID: 50
        Date: 30/01/2012
        Time: 20:32:51
        User: N/A
        Computer: SERVER51338
        Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream and has disconnected the client.
        Data:
        0000: 00 00 04 00 02 00 52 00   ......R.
        0008: 00 00 00 00 32 00 0a c0   ....2..À
        0010: 00 00 00 00 32 00 0a c0   ....2..À
        0018: 00 00 00 00 00 00 00 00   ........
        0020: 00 00 00 00 00 00 00 00   ........
        0028: 92 01 00 00               ...

    with no more of the first error type. I am concerned that someone is trying to brute-force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed). I have also changed the password to an even more secure one. I don't know why this brute-force attack caused the web service to stop, and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure, and what should I do to make sure the webserver doesn't go down any more? Thanks.

    Read the article

  • amazon ec2 ubuntu with gitlab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab:

        http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following check on the status:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 instance is just a free-tier setup, so it isn't that well spec'd.

    Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now - for a period of time I was getting the Nginx holding page (telling me I still needed to perform configuration). This, though, disappeared after I removed the default file from /etc/nginx/sites-enabled/ (after reading on a troubleshooting page that this could be the issue). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled.

    The content of this file is as follows:

        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen *:80 default_server;  # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
            server_name git.mydom.co.uk; # e.g., server_name source.example.com;
            server_tokens off;           # don't show the version number, a security best practice
            root /home/git/gitlab/public;

            # Increase this if you want to upload large attachments
            # Or if you want to accept large git objects over http
            client_max_body_size 20m;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;.
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

    All the paths mentioned in here look ok... I'm about at the end of my knowledge now!

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository.

    Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean 2 backup drives or some kind of RAID per PC, right? Because you can't trust a single point of failure. Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?

    Read the article

  • nm-applet gone?

    - by welp
    nm-applet seems to have disappeared from my system. I am running 12.10. Here's what I get when I check my package manager logs:

        ? ~ grep network-manager /var/log/dpkg.log
        2012-10-06 10:37:08 upgrade network-manager-gnome:amd64 0.9.6.2-0ubuntu5 0.9.6.2-0ubuntu6
        2012-10-06 10:37:08 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:09 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 remove network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 install network-manager-gnome:amd64 0.9.6.2-0ubuntu6 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:06 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:06 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        ? ~

    Unfortunately, I cannot find the network-manager-applet package at all:

        ? ~ apt-cache search network-manager-applet
        ? ~

    Here are the contents of /etc/apt/sources.list:

        ? ~ cat /etc/apt/sources.list
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal universe
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal universe
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse

        deb http://security.ubuntu.com/ubuntu quantal-security main restricted
        deb-src http://security.ubuntu.com/ubuntu quantal-security main restricted
        deb http://security.ubuntu.com/ubuntu quantal-security universe
        deb-src http://security.ubuntu.com/ubuntu quantal-security universe
        deb http://security.ubuntu.com/ubuntu quantal-security multiverse
        deb-src http://security.ubuntu.com/ubuntu quantal-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu precise partner

        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu quantal main
        deb-src http://extras.ubuntu.com/ubuntu quantal main
        ? ~

    Right now, I can't think of anything else. Happy to provide more info upon request.

    Read the article

  • Basic shadow mapping fails on NVIDIA card?

    - by James
    Recently I switched from an AMD Radeon HD 6870 card to an (MSI) NVIDIA GTX 670 for performance reasons. I found, however, that my implementation of shadow mapping in all my applications failed. In a very simple shadow POC project, the problem appears to be that the scene being drawn never results in a draw to the depth map, and as a result the entire depth map is just infinity, 1.0. (Reading directly from the depth component after the draw (glReadPixels) shows every pixel is infinity (1.0); replacing the depth comparison in the shader with a comparison of the depth from the shadow map with 1.0 shadows the entire scene; and writing random values to the depth map and then not calling glClear(GL_DEPTH_BUFFER_BIT) results in a random noisy pattern on the scene elements - from which we can infer that the uploading of the depth texture and the comparison within the shader are functioning perfectly.) Since the problem appears almost certainly to be in the depth render, this is the code for that:

        const int s_res = 1024;
        GLuint shadowMap_tex;
        GLuint shadowMap_prog;
        GLint  sm_attr_coord3d;
        GLint  sm_uniform_mvp;
        GLuint fbo_handle;
        GLuint renderBuffer;
        bool isMappingShad = false;

        // The scene consists of a plane with a box above it
        GLfloat scene[] = {
            -10.0, 0.0, -10.0, 0.5, 0.0,
             10.0, 0.0, -10.0, 1.0, 0.0,
             10.0, 0.0,  10.0, 1.0, 0.5,
            -10.0, 0.0, -10.0, 0.5, 0.0,
            -10.0, 0.0,  10.0, 0.5, 0.5,
             10.0, 0.0,  10.0, 1.0, 0.5,
            ...
        };

        // Initialize the stuff used by the shadow map generator
        int initShadowMap()
        {
            // Initialize the shadowMap shader program
            if (create_program("shadow.v.glsl", "shadow.f.glsl", shadowMap_prog) != 1)
                return -1;

            const char* attribute_name = "coord3d";
            sm_attr_coord3d = glGetAttribLocation(shadowMap_prog, attribute_name);
            if (sm_attr_coord3d == -1) {
                fprintf(stderr, "Could not bind attribute %s\n", attribute_name);
                return 0;
            }

            const char* uniform_name = "mvp";
            sm_uniform_mvp = glGetUniformLocation(shadowMap_prog, uniform_name);
            if (sm_uniform_mvp == -1) {
                fprintf(stderr, "Could not bind uniform %s\n", uniform_name);
                return 0;
            }

            // Create a framebuffer
            glGenFramebuffers(1, &fbo_handle);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);

            // Create render buffer
            glGenRenderbuffers(1, &renderBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);

            // Setup the shadow texture
            glGenTextures(1, &shadowMap_tex);
            glBindTexture(GL_TEXTURE_2D, shadowMap_tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, s_res, s_res, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            return 0;
        }

        // Delete stuff
        void dnitShadowMap()
        {
            glDeleteFramebuffers(1, &fbo_handle);
            glDeleteRenderbuffers(1, &renderBuffer);
            glDeleteTextures(1, &shadowMap_tex);
            glDeleteProgram(shadowMap_prog);
        }

        int loadSMap()
        {
            // Bind MVP stuff
            glm::mat4 view = glm::lookAt(glm::vec3(10.0, 10.0, 5.0), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
            glm::mat4 projection = glm::ortho<float>(-10, 10, -8, 8, -10, 40);
            glm::mat4 mvp = projection * view;

            glm::mat4 biasMatrix(
                0.5, 0.0, 0.0, 0.0,
                0.0, 0.5, 0.0, 0.0,
                0.0, 0.0, 0.5, 0.0,
                0.5, 0.5, 0.5, 1.0
            );
            glm::mat4 lsMVP = biasMatrix * mvp;

            // Upload light source matrix to the main shader programs
            glUniformMatrix4fv(uniform_ls_mvp, 1, GL_FALSE, glm::value_ptr(lsMVP));

            glUseProgram(shadowMap_prog);
            glUniformMatrix4fv(sm_uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));

            // Draw to the framebuffer (with depth buffer only draw)
            glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);
            glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
            glBindTexture(GL_TEXTURE_2D, shadowMap_tex);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap_tex, 0);
            glDrawBuffer(GL_NONE);
            glReadBuffer(GL_NONE);

            GLenum result = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if (GL_FRAMEBUFFER_COMPLETE != result) {
                printf("ERROR: Framebuffer is not complete.\n");
                return -1;
            }

            // Draw shadow scene
            printf("Creating shadow buffers..\n");
            int ticks = SDL_GetTicks();
            glClear(GL_DEPTH_BUFFER_BIT); // Wipe the depth buffer
            glViewport(0, 0, s_res, s_res);
            isMappingShad = true;

            // DRAW
            glEnableVertexAttribArray(sm_attr_coord3d);
            glVertexAttribPointer(sm_attr_coord3d, 3, GL_FLOAT, GL_FALSE, 5 * 4, scene);
            glDrawArrays(GL_TRIANGLES, 0, 14 * 3);
            glDisableVertexAttribArray(sm_attr_coord3d);

            isMappingShad = false;
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            printf("Render Sbuf in %dms (GLerr: %d)\n", SDL_GetTicks() - ticks, glGetError());
            return 0;
        }

    This is the full code for the POC shadow mapping project (C++). (It requires the SDL 1.2, SDL-image 1.2, GLEW (1.5) and GLM development headers.) initShadowMap is called, followed by loadSMap; the scene is drawn from the camera POV, and then dnitShadowMap is called. I followed this tutorial originally (along with another, more comprehensive tutorial which has disappeared as this guy re-configured his site, but which used to be here (404)). I've ensured that the scene is visible (as can be seen within the full project) to the light source (which uses an orthogonal projection matrix). Shader utilities function fine in non-shadow-mapped projects. I should also note that at no point is the GL error state set.

    What am I doing wrong here, and why did this not cause problems on my AMD card? (System: Ubuntu 12.04, Linux 3.2.0-49-generic, 64-bit, with the nvidia-experimental-310 driver package. All other games are functioning fine, so it's most likely not a card/driver issue.)

    Read the article

  • LIMBO fails on startup with Internal errors - invalid parameters received

    - by user61262
    I installed LIMBO from the Humble Bundle V and, as far as I am aware, it comes packaged with Wine (I also installed the latest from the repos in case it was because of that). However, the game doesn't even start, and fails with the message:

        Wine Program Error
        Internal errors - invalid parameters received.

    Is there a way to log the error, or does anyone know why this happens? This question was asked previously, but it seems to have disappeared. My graphics card is a GeForce GT 250. Cheers, ice.

    [edit: Wine outputs the following error:

        wine /opt/limbo/support/limbo/drive_c/Program\ Files/limbo/limbo.exe
        fixme:system:SystemParametersInfoW Unimplemented action: 59 (SPI_SETSTICKYKEYS)
        fixme:system:SystemParametersInfoW Unimplemented action: 53 (SPI_SETTOGGLEKEYS)
        fixme:system:SystemParametersInfoW Unimplemented action: 51 (SPI_SETFILTERKEYS)
        fixme:win:EnumDisplayDevicesW ((null),0,0x32f580,0x00000000), stub!
        err:x11settings:X11DRV_ChangeDisplaySettingsEx No matching mode found 1920x1080x32 @60! (XRandR)
        err:xrandr:X11DRV_XRandR_SetCurrentMode Resolution change not successful -- perhaps display has changed?
        wine: Unhandled page fault on read access to 0x00000000 at address 0x48213e (thread 0009), starting debugger...

    The debugger has the following output:

        Unhandled exception: page fault on read access to 0x00000000 in 32-bit code (0x0048213e).
        Register dump:
         CS:0073 SS:007b DS:007b ES:007b FS:0033 GS:003b
         EIP:0048213e ESP:0032f9f4 EBP:0037cdd0 EFLAGS:00010202( R- -- I - - - )
         EAX:00000000 EBX:00000000 ECX:00000000 EDX:0037cf4c
         ESI:0037cda8 EDI:0037cdcc
        Stack dump:
        0x0032f9f4:  0037cda8 0034c708 7bc35120 00000000
        0x0032fa04:  0037cda8 0032fa38 0079fc58 00000000
        0x0032fa14:  0048b7d4 00000001 0037cdcc 00000001
        0x0032fa24:  00000780 00000438 0034c620 00000000
        0x0032fa34:  0034c708 0032fa78 007a04e2 00000002
        0x0032fa44:  0048c4bc 00000780 00000438 0037cda8
        Backtrace:
        =>0 0x0048213e in limbo (+0x8213e) (0x0037cdd0)
        0x0048213e: movl 0x0(%eax),%edx
        Modules:
        Module  Address         Debug info  Name (103 modules)
        PE        400000-  926000   Export  limbo
        PE      10000000-101ff000   Deferred d3dx9_43
        ELF     79bb3000-7b800000   Deferred libnvidia-glcore.so.295.53
        ELF     7b800000-7ba15000   Deferred kernel32<elf>
          \-PE  7b810000-7ba15000   \        kernel32
        ELF     7bc00000-7bcc3000   Deferred ntdll<elf>
          \-PE  7bc10000-7bcc3000   \        ntdll
        ELF     7bf00000-7bf04000   Deferred <wine-loader>
        ELF     7d7e0000-7d7e4000   Deferred libnvidia-tls.so.295.53
        ELF     7d7e4000-7d8bc000   Deferred libgl.so.1
        ELF     7d9d0000-7d9d9000   Deferred librt.so.1
        ELF     7d9d9000-7d9de000   Deferred libgpg-error.so.0
        ELF     7d9de000-7d9f6000   Deferred libresolv.so.2
        ELF     7d9f6000-7d9fa000   Deferred libkeyutils.so.1
        ELF     7d9fa000-7da43000   Deferred libdbus-1.so.3
        ELF     7da43000-7da55000   Deferred libp11-kit.so.0
        ELF     7da55000-7dada000   Deferred libgcrypt.so.11
        ELF     7dada000-7daec000   Deferred libtasn1.so.3
        ELF     7daec000-7daf5000   Deferred libkrb5support.so.0
        ELF     7daf5000-7dafa000   Deferred libcom_err.so.2
        ELF     7dafa000-7db22000   Deferred libk5crypto.so.3
        ELF     7db22000-7dbf1000   Deferred libkrb5.so.3
        ELF     7dbf1000-7dc03000   Deferred libavahi-client.so.3
        ELF     7dc03000-7dc11000   Deferred libavahi-common.so.3
        ELF     7dc11000-7dcd5000   Deferred libgnutls.so.26
        ELF     7dcd5000-7dd13000   Deferred libgssapi_krb5.so.2
        ELF     7dd13000-7dd66000   Deferred libcups.so.2
        ELF     7dd94000-7ddc8000   Deferred uxtheme<elf>
          \-PE  7dda0000-7ddc8000   \        uxtheme
        ELF     7ddc8000-7ddd3000   Deferred libxcursor.so.1
        ELF     7ddd4000-7dde7000   Deferred gnome-keyring-pkcs11.so
        ELF     7de47000-7de4d000   Deferred libxfixes.so.3
        ELF     7deac000-7ded6000   Deferred libexpat.so.1
        ELF     7ded6000-7df0a000   Deferred libfontconfig.so.1
        ELF     7df0a000-7df1a000   Deferred libxi.so.6
        ELF     7df1a000-7df1e000   Deferred libxcomposite.so.1
        ELF     7df1e000-7df27000   Deferred libxrandr.so.2
        ELF     7df27000-7df31000   Deferred libxrender.so.1
        ELF     7df31000-7df37000   Deferred libxxf86vm.so.1
        ELF     7df37000-7df3b000   Deferred libxinerama.so.1
        ELF     7df3b000-7df5d000   Deferred imm32<elf>
          \-PE  7df40000-7df5d000   \        imm32
        ELF     7df5d000-7df64000   Deferred libxdmcp.so.6
        ELF     7df64000-7df85000   Deferred libxcb.so.1
        ELF     7df85000-7df9f000   Deferred libice.so.6
        ELF     7df9f000-7e0d3000   Deferred libx11.so.6
        ELF     7e0d3000-7e0e5000   Deferred libxext.so.6
        ELF     7e0e5000-7e178000   Deferred winex11<elf>
          \-PE  7e0f0000-7e178000   \        winex11
        ELF     7e178000-7e18e000   Deferred libz.so.1
        ELF     7e18e000-7e228000   Deferred libfreetype.so.6
        ELF     7e228000-7e247000   Deferred libtinfo.so.5
        ELF     7e247000-7e269000   Deferred libncurses.so.5
        ELF     7e27d000-7e292000   Deferred xinput1_3<elf>
          \-PE  7e280000-7e292000   \        xinput1_3
        ELF     7e292000-7e2a6000   Deferred psapi<elf>
          \-PE  7e2a0000-7e2a6000   \        psapi
        ELF     7e2a6000-7e304000   Deferred dbghelp<elf>
          \-PE  7e2b0000-7e304000   \        dbghelp
        ELF     7e304000-7e391000   Deferred msvcrt<elf>
          \-PE  7e320000-7e391000   \        msvcrt
        ELF     7e391000-7e4c5000   Deferred wined3d<elf>
          \-PE  7e3a0000-7e4c5000   \        wined3d
        ELF     7e4c5000-7e4fe000   Deferred d3d9<elf>
          \-PE  7e4d0000-7e4fe000   \        d3d9
        ELF     7e4fe000-7e573000   Deferred rpcrt4<elf>
          \-PE  7e510000-7e573000   \        rpcrt4
        ELF     7e573000-7e67b000   Deferred ole32<elf>
          \-PE  7e590000-7e67b000   \        ole32
        ELF     7e67b000-7e697000   Deferred dinput8<elf>
          \-PE  7e680000-7e697000   \        dinput8
        ELF     7e697000-7e6d1000   Deferred winspool<elf>
          \-PE  7e6a0000-7e6d1000   \        winspool
        ELF     7e6d1000-7e7c9000   Deferred comctl32<elf>
          \-PE  7e6e0000-7e7c9000   \        comctl32
        ELF     7e7c9000-7e833000   Deferred shlwapi<elf>
          \-PE  7e7e0000-7e833000   \        shlwapi
        ELF     7e833000-7ea44000   Deferred shell32<elf>
          \-PE  7e840000-7ea44000   \        shell32
        ELF     7ea44000-7eb23000   Deferred comdlg32<elf>
          \-PE  7ea50000-7eb23000   \        comdlg32
        ELF     7eb23000-7eb3c000   Deferred version<elf>
          \-PE  7eb30000-7eb3c000   \        version
        ELF     7eb3c000-7eb9c000   Deferred advapi32<elf>
          \-PE  7eb50000-7eb9c000   \        advapi32
        ELF     7eb9c000-7ec59000   Deferred gdi32<elf>
          \-PE  7ebb0000-7ec59000   \        gdi32
        ELF     7ec59000-7ed99000   Deferred user32<elf>
          \-PE  7ec70000-7ed99000   \        user32
        ELF     7ef99000-7efa6000   Deferred libnss_files.so.2
        ELF     7efa6000-7efc0000   Deferred libnsl.so.1
        ELF     7efc0000-7efec000   Deferred libm.so.6
        ELF     7efee000-7eff4000   Deferred libuuid.so.1
        ELF     7eff4000-7f000000   Deferred libnss_nis.so.2
        ELF     b7411000-b7415000   Deferred libxau.so.6
        ELF     b7415000-b741e000   Deferred libnss_compat.so.2
        ELF     b741f000-b7424000   Deferred libdl.so.2
        ELF     b7424000-b75ca000   Deferred libc.so.6
        ELF     b75cb000-b75e6000   Deferred libpthread.so.0
        ELF     b75e9000-b75f2000   Deferred libsm.so.6
        ELF     b75fa000-b773c000   Dwarf    libwine.so.1
        ELF     b773e000-b7760000   Deferred ld-linux.so.2
        ELF     b7760000-b7761000   Deferred [vdso].so
        Threads:
        process  tid      prio (all id:s are in hex)
        00000008 (D) Z:\opt\limbo\support\limbo\drive_c\Program Files\limbo\limbo.exe
                 00000009    0 <==
        0000000e services.exe
                 00000020    0
                 0000001f    0
                 00000019    0
                 00000018    0
                 00000017    0
                 00000015    0
                 00000010    0
                 0000000f    0
        00000012 winedevice.exe
                 0000001d    0
                 0000001a    0
                 00000014    0
                 00000013    0
        0000001b plugplay.exe
                 00000021    0
                 0000001e    0
                 0000001c    0
        00000022 explorer.exe
                 00000023    0
        System information:
            Wine build: wine-1.4
            Platform: i386
            Host system: Linux
            Host version: 3.2.0-24-generic-pae

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an Always Connected World for us. Recently I enrolled myself to learn new technologies as a student. I had decided to focus on learning, and decided not to stay connected to the internet while I was in the learning session. On the second day of the event, after the learning was over, I noticed lots of notifications from my friend on my various social media handles. He had connected with me on Twitter, Facebook, Google+, LinkedIn and YouTube, as well as via SMS and WhatsApp on the phone, Skype messages and, not to forget, a few emails. I right away called him up. The problem was very unique - let us hear the problem in his own words:

    "Pinal - we are in big trouble, we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find why the rows are getting deleted. We have made sure that there is no DELETE command executed on the table as well. As a matter of fact, we have removed every single place in the code which is referencing the table. We have done so many crazy things out of desperation, but no luck. The rows are continuously deleted in a random pattern. Do you think we have problems with intrusion or a virus?"

    After describing the problems, he had pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (for many reasons). Well, my immediate reaction was to get online with him. His problem was unique to him, and his team had been all out to fix the issue since morning. As he said, he had done quite a lot out of desperation. I started asking questions about auditing, policy management and profiling the data. Very soon I realized that this problem was not as advanced as it looked. There was no intrusion, SQL injection or virus issue. Well, long story short - it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE.

    CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL)

    In simple words - when ON DELETE CASCADE is specified, whenever data from Table A is deleted, any row in another table that references it via the foreign key is deleted as well.

    In my friend's case, they had two tables, Products and ProductDetails. They had created foreign key referential integrity on the product ID between the tables. Now, as the fall season came up, they were updating their catalogue. While updating the catalogue, they were deleting products which were no longer available. As the changes were cascading, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error or anything; SQL Server is behaving exactly how it should behave. The problem was in the understanding and an inappropriate implementation of the business logic.
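    To make the behaviour concrete, here is a minimal T-SQL sketch of the same situation (the table and column names are simplified stand-ins for illustration, not my friend's actual schema):

        -- Parent and child tables, linked with a cascading foreign key
        CREATE TABLE Products
        (
            ProductID INT NOT NULL PRIMARY KEY,
            Name      VARCHAR(100) NOT NULL
        );

        CREATE TABLE ProductDetails
        (
            DetailID  INT NOT NULL PRIMARY KEY,
            ProductID INT NOT NULL,
            Detail    VARCHAR(200) NOT NULL,
            CONSTRAINT FK_ProductDetails_Products
                FOREIGN KEY (ProductID) REFERENCES Products (ProductID)
                ON UPDATE CASCADE
                ON DELETE CASCADE
        );

        INSERT INTO Products VALUES (1, 'Widget');
        INSERT INTO ProductDetails VALUES (100, 1, 'Widget - red');
        INSERT INTO ProductDetails VALUES (101, 1, 'Widget - blue');

        -- Deleting the parent silently removes the child rows as well
        DELETE FROM Products WHERE ProductID = 1;

        SELECT COUNT(*) FROM ProductDetails;  -- returns 0: the detail rows are gone

    No trigger and no DELETE against ProductDetails - the cascade alone removes the child rows, which is exactly the "disappearing rows" behaviour my friend was chasing.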
    What they needed was a Product Master table, a Current Product Catalogue table, and a Product Order Details History table. However, they were using only two tables, and without proper understanding, the relation between them was built using foreign keys. If they were to keep only two tables, they should have used a soft delete, which does not actually delete the record but just hides it in the original product table. This workaround could have saved them from the cascading delete issue. I will be writing a detailed post on the design implications etc. in a future post, as in the above three lines I cannot cover every issue related to designing, and it is also not in the scope of this blog post. More about designing in future blog posts.

    Once they learned their mistake, they were happy, as there was no intrusion. But trust me, sometimes we are our own enemy, and this is a great example of it. In tomorrow's blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft, you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS, with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes:

        KB983504 - Hotfix
        KB983578 - Patch
        KB2401992 - Hotfix

    In the time since they got the hotfixes, they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data, it takes time to copy it around, it takes time to do the upgrade, and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas, they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration Console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not show up as attachable.

    This was a dire situation, as taking 20+ hours to repeat the process would leave the customer over time, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"

    (see http://msdn.microsoft.com/en-us/library/ff407077.aspx)

    WARNING: Never run this command! Now, this command does something a little nasty. It assumes that there really should not be anything wrong, and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated but the data is left behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the configuration database.

    Figure: After attaching, the TPC enters a servicing mode.

    After reattaching the team project collection, we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "tfsconfig recover", but was not fazed. Figure: This message looks suspiciously like the wrong servicing version As it turns out, the servicing version was slightly out of sync with the schema:
    KB        | Schema | Successful
    KB983504  | 341    | Yes
    KB983578  | 344    | Sort of
    KB2401992 | 360    | No
    Figure: KB/Schema table with a note on each hotfix's success The Schema version above represents the final end-of-run version for that hotfix or patch. The only way forward: the problem was that the version was somewhere between 341 and 344. This is not a nice place to be, and the engineer gave us the only way forward: remove the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the "tfsconfig recover" command, then you are exactly right. Figure: Sneakily changing that 3 to a 1 should do the trick Figure: Changing the status and dropping the version should do it Now that we had done that, we were able to safely reattach and enable the Team Project Collection. Figure: The TPC is now all attached and running You may think that this is the end of the story, but it is not. After a while of mulling it over and seeking expert advice, we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line was pretty high. We contacted the customer and made them aware that, in all likelihood, the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I'm talking about "feelings", not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas come just from reading different articles on the net and official MS statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy"; I don't care. I hope that people can appreciate my opinion even if it doesn't match theirs. The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people… so I can understand this "recycling" of an existing name. So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) was market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?), or it can lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop in the huge mass of anonymous PCs on display… only Macs shine out there…). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition in a market that has been ruled only by prices (the lower the better, even when that means low quality) with no innovative features at all (when was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial play for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time. I saw the new devices (the WinRT and Pro models) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using "HD" and "full HD" to define display resolution instead of giving the real figures, and reviving the "ClearType" brand (now dead in Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "Retina" displays, which brought very high definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report an NVIDIA Tegra for the ARM tablet and an i5 for the x86 one) or expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).
    Also, nothing about the price, and this will be another critical point: other platforms out there already provide lots of applications and have a good user base, so if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model, where flash memory (dirt cheap when used in thumbdrives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram comparison) when mounted inside a tablet or phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope. To be honest, I really don't like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!), or the lack of desktop support on the ARM version (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected, you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent in the past converting .NET IL into real machine code look like a huge waste of time… as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven't changed this situation. The new Metro UI got mixed reviews. For my part, I should say that it is very pleasant to use on a touch screen. I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines, etc., this kind of UI can be a great selling point. I don't need a new tablet (to be honest, I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • ASP.NET site sometimes freezing up and/or showing odd text at the top of the page on load

    - by MGOwen
    I have various servers (dev, 2 x test, 2 x prod) running the same ASP.NET site. The test and prod servers are in load-balanced pairs (prod1 with prod2, and test1 with test2). The test server pair is exhibiting some kind of (super) slowdown or freeze during about one in ten page loads. Sometimes a line of text appears at the very top of the page which looks something like: 00 OK Date: Thu, 01 Apr 2010 01:50:09 GMT Server: Microsoft-IIS/6.0 X-Powered_By: ASP.NET X-AspNet-Version:2.0.50727 Cache-Control:private Content-Type:text/html; charset=ut (the beginning and end are "cut off".) Has anyone seen anything like this before? Any idea what it means or what's causing it? Edit: I often see this too when clicking something; it comes up as red text on a yellow page: XML Parsing Error: not well-formed Location: http://203.111.46.211/3DSS/CompanyCompliance.aspx?cid=14 Line Number 1, Column 24:2mMTehON9OUNKySVaJ3ROpN" / -----------------------^ If I go back and click again, it works (I see the page I clicked on, not the above error message). Update: ...And, instead of the page loading, I sometimes just get a white screen with text like this in black (it looks a lot like the text above): HTTP/1.1 302 Found Date: Wed, 21 Apr 2010 04:53:39 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 Location: /3DSS/EditSections.aspx?id=3&siteId=56&sectionId=46 Set-Cookie: .3DSS=A6CAC223D0F2517D77C7C68EEF069ABA85E9392E93417FFA4209E2621B8DCE38174AD699C9F0221D30D49E108CAB8A828408CF214549A949501DAFAF59F080375A50162361E4AA94E08874BF0945B2EF; path=/; HttpOnly Cache-Control: private Content-Type: text/html; charset=utf-8 Content-Length: 184 object moved here Where "here" is a link that points to a URL just like the one I'm requesting, except with an extra folder in it, meaning something like: http://123.1.2.3/MySite//MySite/Page.aspx?option=1 instead of: http://123.1.2.3/MySite/Page.aspx?option=1 Update: A colleague of mine found some info saying it might be because the test servers are running IIS in 64-bit mode (64-bit Windows 2003; the prod servers are 32-bit Windows 2003). So we tried telling IIS to use 32-bit mode: cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1 followed by %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i (from this MS support page). But IIS stopped working altogether (we got "Server Unavailable" on a white page instead of our web sites). Reversing the above (see the link) didn't work at first either. The ASP.NET tab disappeared from our IIS web site properties, and we had to mess around for an hour uninstalling (aspnet_regiis.exe -u) and reinstalling 32-bit ASP.NET and adding Default.aspx manually back into the default documents. We'll probably try again in a few days; if anyone has anything to add in the meantime, please do.

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors, and string constants that are to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
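    For anyone trying the same trick, here is a minimal sketch of how those two symbols can be consumed from C++. The objcopy invocation in the comment is an assumption (the exact flags depend on your toolchain), and whether the symbols carry a leading underscore varies, as described above:

        #include <string>

        // Assumed to have been produced by something like:
        //   objcopy -I binary -O pe-i386 inbuilt_training_data.bin inbuilt_training_data.o
        // The symbols mark the first byte and one-past-the-last byte of the embedded blob.
        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        std::string load_training_data()
        {
            const char* begin = &binary_inbuilt_training_data_bin_start;
            const char* end   = &binary_inbuilt_training_data_bin_end;
            return std::string(begin, end); // copy the embedded bytes into a string
        }

    Linking the generated .o into the executable alongside this translation unit should be all that is needed; since nothing is parsed at compile time, the 3 GB memory problem never arises.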

    Read the article

  • Changing the ItemsSource of a TreeView makes its children invisible, when they were already displayed

    - by Marnix Kraus
    I found a strange problem in WPF, using the ItemsSource of a TreeView. I hope I can make this specific problem clear for you. First, a story. There is a TreeView. It has a list of TreeViewItems as ItemsSource. This list is called _roots. There is another list, called _leafs. As in any tree, the _roots contain the _leafs in some hierarchical way. For example: <TreeviewItem Header="Jungle"> <TreeviewItem> <SpecialTreeviewItem Header="Monkey"/> <SpecialTreeviewItem Header="Apple"/> </TreeviewItem> </TreeviewItem> Now I am trying to switch between these two lists as ItemsSource. It seemed to work fine, but it doesn't: 1. When the Jungle item is collapsed and I change the ItemsSource to _leafs, then change it back to _roots, everything works fine and all items can be expanded and shown. 2. But when the Jungle item is expanded (and the special items are already visible) and I change to the _leafs ItemsSource, then change the ItemsSource back to _roots, all special items have disappeared! 3. Also, when I do the same as case 2 but first collapse the Jungle item again, the special items also disappear. I did a lot of debugging before posting this question here, and came to the following conclusion: logging in the visibility-changed event shows that visibility is set to false for all items that were already visible (that is, when _roots becomes visible, the special items become invisible, because they were already visible). So, IsVisible is false for the items, but Visibility = Visible, which is a bit strange. The problem seems to depend on the use of the _roots list, which in a certain way contains the _leafs. When I change the ItemsSource to different lists with special items in them, everything works fine. The hierarchical structure of _roots seems to be what breaks things. I hope this is a complete overview of my problem. Help would be appreciated.

    Read the article

  • Using GET Method to Maintain Variables After Logging In

    - by John
    Hello, I am using a login system. When I navigate to comments/index.php without logging in, some variables get passed along using the GET method just fine. Then, if I log in while I am on this page, these variables disappear. The variables that disappear are $submission, $submissionid, and $url. I thought I could use the GET method to keep them alive after logging in by appending ?submission='.$submission.'&submissionid='.$submissionid.'&url='.$url.' to the URL of the login form action, as seen below. But the variables still disappeared after I made this addition. The relevant code I am trying to use is below. Any idea what I can do to make it do what I want? Thanks in advance, John
    In comments/index.php:
        require_once "header.php";
        include "login.php";
        include "comments.php";
    In login.php:
        if (!isLoggedIn()) {
            if (isset($_POST['cmdlogin'])) {
                if (checkLogin($_POST['username'], $_POST['password'])) {
                    show_userbox();
                } else {
                    echo "Incorrect login information!";
                    show_loginform();
                }
            } else {
                show_loginform();
            }
        } else {
            show_userbox();
        }
    In comments.php:
        $url = mysql_real_escape_string($_GET['url']);
        $submission = mysql_real_escape_string($_GET['submission']);
        $submissionid = mysql_real_escape_string($_GET['submissionid']);
        echo '<div class="subcommenttitle"><a href="http://www.'.$url.'">'.$submission.'</a></div>';
    The login function:
        function show_loginform($disabled = false)
        {
            // NB: $submission, $submissionid and $url are not in scope inside this
            // function unless they are passed in or declared global.
            echo '<form name="login-form" id="login-form" method="post" action="./index.php?submission='.$submission.'&submissionid='.$submissionid.'&url='.$url.'">
                <div class="usernameformtext"><label title="Username">Username: </label></div>
                <div class="usernameformfield"><input tabindex="1" accesskey="u" name="username" type="text" maxlength="30" id="username" /></div>
                <div class="passwordformtext"><label title="Password">Password: </label></div>
                <div class="passwordformfield"><input tabindex="2" accesskey="p" name="password" type="password" maxlength="15" id="password" /></div>
                <div class="registertext"><a href="http://www...com/sandbox/register.php" title="Register">Register</a></div>
                <div class="lostpasswordtext"><a href="http://www...com/sandbox/lostpassword.php" title="Lost Password">Lost password?</a></div>
                <p class="loginbutton"><input tabindex="3" accesskey="l" type="submit" name="cmdlogin" value="Login" ';
            if ($disabled == true) {
                echo 'disabled="disabled"';
            }
            echo ' /></p></form>';
        }

    Read the article

  • Windows API Programming...

    - by vs4vijay
    Hello there, it's me, Vijay. I'm trying to draw a crosshair (a kind of cursor) on the screen while running a game (Counter-Strike), so I did this:

        #include <iostream>
        #include <windows.h>
        #include <conio.h>

        int main()
        {
            DWORD pid = 0; // set this to the process ID of the game
            HANDLE hl = OpenProcess(PROCESS_ALL_ACCESS, TRUE, pid); // handle to the game process
            HDC hDC = GetDC(NULL); // here I pass NULL for the entire screen
            HBRUSH hb = CreateSolidBrush(RGB(0, 255, 255));
            SelectObject(hDC, hb);
            POINT p;
            while (!kbhit())
            {
                int x = 1360 / 2, y = 768 / 2; // centre of the screen
                MoveToEx(hDC, x - 20, y, &p); // horizontal bar of the crosshair
                LineTo(hDC, x + 20, y);
                SetPixel(hDC, x, y, RGB(255, 0, 0)); // red centre dot
                SetPixel(hDC, x - 1, y - 1, RGB(255, 0, 0));
                SetPixel(hDC, x - 1, y + 1, RGB(255, 0, 0));
                SetPixel(hDC, x + 1, y + 1, RGB(255, 0, 0));
                SetPixel(hDC, x + 1, y - 1, RGB(255, 0, 0));
                MoveToEx(hDC, x, y - 20, &p); // vertical bar of the crosshair
                LineTo(hDC, x, y + 20);
            }
            std::cin.get();
            return 0;
        }

    It works fine: on the desktop I see the crosshair. But my problem is that when I run the game, the crosshair disappears. So I think I did not get hold of the game's window, and I tried passing the HANDLE to GetDC(hl). But GetDC takes only an HWND (handle to a window), so I typecast it like this: HWND hl = (HWND)OpenProcess(PROCESS_ALL_ACCESS, TRUE, pid); and passed hl to GetDC(hl), but it doesn't work. What's wrong with the code? Please tell me how to draw a simple shape on the screen over another process or game. PS: my compiler is Dev-C++ and my OS is WinXP SP3.
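    For what it's worth, here is a minimal, hedged sketch of the usual suggestion for this situation: a process HANDLE and an HWND are unrelated handle types, so the cast cannot work; instead, look the window up with FindWindow and draw into its device context. The window title "Counter-Strike" below is an assumption, and note that a fullscreen DirectX game repaints the screen every frame, so GDI drawing is typically overdrawn immediately; windowed mode gives this approach its best chance.

        #include <windows.h>
        #include <conio.h>

        int main()
        {
            // Hypothetical window title; check the real one with a tool such as Spy++.
            HWND hGame = FindWindowA(NULL, "Counter-Strike");
            if (hGame == NULL)
                return 1; // game window not found

            HDC hDC = GetDC(hGame); // DC of the game window, not of the whole screen
            while (!kbhit())
            {
                RECT rc;
                GetClientRect(hGame, &rc);
                int x = rc.right / 2, y = rc.bottom / 2; // centre of the client area
                MoveToEx(hDC, x - 20, y, NULL); // horizontal bar
                LineTo(hDC, x + 20, y);
                MoveToEx(hDC, x, y - 20, NULL); // vertical bar
                LineTo(hDC, x, y + 20);
            }
            ReleaseDC(hGame, hDC);
            return 0;
        }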

    Read the article

< Previous Page | 12 13 14 15 16 17 18  | Next Page >