Search Results

Search found 20970 results on 839 pages for 'real mode'.

Page 47/839 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Real life usage of the projective plane theory

    - by Elazar Leibovich
    I'm learning about the theory of the projective plane. Very generally speaking, it is an extension of the plane which includes additional points, defined as the intersection points of parallel lines. In the projective plane, every two lines have an intersection point, whether they're parallel or not. Every point in the projective plane can be represented by three numbers (you actually need fewer than that, but never mind for now). Is there any real-life application which uses the projective plane? I can imagine that, for instance, software which needs to find the intersection of two lines could benefit from always having an intersection point, which might lead to simpler code, but is it really used?
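
    Homogeneous coordinates (the "three numbers" above) are in fact used heavily in computer graphics and computer vision, largely because they make intersection and transformation code uniform. A minimal sketch of the "simpler code" idea, with lines and points as homogeneous triples (the sample lines are invented for illustration):

      # Lines as homogeneous triples (a, b, c) for ax + by + c = 0, points (x, y) as (x, y, 1).
      # The intersection of two lines is their cross product; a third coordinate of 0 means
      # the lines are parallel and "meet" at a point at infinity, so the calling code needs
      # no special case for parallel lines.
      def cross(u, v):
          return (u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0])

      def intersect(line1, line2):
          x, y, w = cross(line1, line2)
          if w == 0:
              return ("direction at infinity", (x, y))   # parallel (or identical) lines
          return (x / w, y / w)

      print(intersect((1, -1, 0), (1, -1, -2)))   # y = x and y = x - 2: parallel
      print(intersect((1, -1, 0), (1, 1, -2)))    # y = x and y = -x + 2: (1.0, 1.0)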

    Read the article

  • Looking for a real-time IMAP notification of new Emails

    - by Emil
    I'm looking for a way to monitor a GMail inbox for new e-mails. However, I want to avoid checking every few minutes, and I'm looking for some sort of real-time notification. I've noticed that Outlook (and other IMAP-supporting clients) instantly shows when there is a new e-mail, but unfortunately all .NET IMAP libraries seem to lack this functionality. Does anyone know of an IMAP library that has this functionality? Or is there another way to be instantly notified of new messages without doing short-period polling?
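
    The mechanism behind that instant behaviour is the IMAP IDLE extension (RFC 2177), which GMail supports: the client issues IDLE and the server then pushes untagged updates (such as "* 12 EXISTS") as mail arrives. A bare-bones protocol sketch over a raw TLS socket; the credentials are placeholders, GMail may additionally require an app password or OAuth, and error handling plus the usual practice of re-issuing IDLE every ~29 minutes are omitted:

      import socket
      import ssl

      HOST, PORT = "imap.gmail.com", 993                 # standard IMAPS host/port
      USER, PASSWORD = "user@gmail.com", "app-password"  # placeholders

      ctx = ssl.create_default_context()
      sock = ctx.wrap_socket(socket.create_connection((HOST, PORT)), server_hostname=HOST)
      rfile = sock.makefile("rb")

      def send(line):
          sock.sendall((line + "\r\n").encode())

      def read_until(tag):
          while True:
              line = rfile.readline().decode()
              print(line.rstrip())
              if line.startswith(tag + " "):      # tagged completion of the command
                  return

      print(rfile.readline().decode().rstrip())   # server greeting
      send('a1 LOGIN "%s" "%s"' % (USER, PASSWORD)); read_until("a1")
      send("a2 SELECT INBOX"); read_until("a2")
      send("a3 IDLE")                             # server answers "+ idling"
      while True:
          line = rfile.readline().decode()
          print(line.rstrip())
          if line.split()[-1:] == ["EXISTS"]:     # a new message has arrived
              break
      send("DONE"); read_until("a3")              # leave IDLE mode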

    Read the article

  • measuring performance - using real clicks vs "ab" command

    - by shanyu
    I have a web site in closed beta, developed in Django, running with MySQL on Debian. In the last few days, the main page has been showing a slowdown. For every ten clicks, one or two receive an extremely slow response (10 seconds or more); the others are as fast as they used to be. While searching for the problem, I ran into an issue that I couldn't grasp: top shows that when I request the main page, mysql shoots up to 90%-100% CPU usage, and I get the page just as the CPU usage drops back to normal. So I thought it was the database. Then I ran ab with -n 1000 -c 5 and got decent performance, about 100 pages per second, just as it was before the slowdown. I would expect worse performance, given that 10-20% of requests take 10 seconds to load. Is this conflict between ab and "real" clicks normal, or am I using ab with the wrong configuration?
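
    One possibility is that the two measurements answer different questions: an aggregate requests-per-second figure can look healthy while 10-20% of requests sit in a slow tail, so it is worth looking at the latency distribution rather than the average (ab itself also prints a percentile breakdown at the end of its run). A rough sketch of measuring the tail directly, with the URL as a placeholder:

      import time
      import urllib.request

      URL = "http://localhost:8000/"   # placeholder for the site's main page
      samples = []
      for _ in range(200):
          t0 = time.perf_counter()
          urllib.request.urlopen(URL).read()
          samples.append(time.perf_counter() - t0)

      samples.sort()
      for p in (50, 90, 99):
          print("p%d: %.3f s" % (p, samples[int(len(samples) * p / 100) - 1]))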

    Read the article

  • Real Time Monitoring System using .net [closed]

    - by sameer
    I need to know the right way of accomplishing this task. Task description: we need to develop an application that displays a dashboard where data fetched from various SQL databases on different servers is shown. This needs to happen in near real time; we can have a refresh interval of, say, 5 minutes. Please suggest an approach that will help me take the right path. Some details, just for reference: 1) How many servers? The application may need to support up to around 100 servers. 2) The application will be used by an administrator to check the health of the SQL Server databases. 3) It will be built by 2 developers in approximately 3 months. 4) The refresh rate will be configurable as an administrator option. Regards, Sameer
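
    The overall shape is usually a scheduled poll that fans out across the servers in parallel and then pushes the collected results to the dashboard. A language-agnostic sketch, written in Python only for brevity; in .NET the same structure maps to a timer or background service plus parallel tasks, and check_health stands in for whatever diagnostic queries you run against each server:

      import time
      from concurrent.futures import ThreadPoolExecutor

      SERVERS = ["sql01", "sql02", "sql03"]      # placeholder list, up to ~100 entries
      REFRESH_SECONDS = 5 * 60                   # administrator-configurable interval

      def check_health(server):
          # placeholder: open a connection, run a few status queries, return a summary
          return {"server": server, "status": "ok", "checked_at": time.time()}

      def refresh_dashboard(results):
          for r in results:
              print(r)                           # a real app would update the UI or a cache

      while True:
          with ThreadPoolExecutor(max_workers=20) as pool:
              refresh_dashboard(list(pool.map(check_health, SERVERS)))
          time.sleep(REFRESH_SECONDS)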

    Read the article

  • XNA Music mixing real-time

    - by Adam L. S.
    I've created a "format" to store segments of music (prelude part, repeated part, ending part) and time information for these segments (offset, scored length) so I can mix it up in real-time as if it were one piece of music, while repeating the repeated part (optionally) indefinitely. This way, the segments can store decay where the next segment is played, while the previous one is finished. (I've created a player for this in Java, and used the Clip class.) I wanted this format, so I can provide a finite length music (for a jukebox feature), while I play infinite length music in-games. However, when I wanted to code a class in XNA that manages this "format" I've noticed, that there is no obvious way to play "Songs" simultaneously/overlapped. How can I do this/what is the best practice, not leaving the XNA framework? (I don't want to create infinite play-lists.)

    Read the article

  • Real thing about "->" and "."

    - by fsdfa
    I always wanted to know the real difference in how the compiler sees a pointer to a struct (in C, say) versus the struct itself. Given struct person p; and struct person *pp;, for pp->age I always imagined that the compiler does "value of pp + offset of the attribute 'age' within the struct". But what does it do with p.age? It would be almost the same. To me, "the programmer", p is not a memory address, it's "the structure itself", but of course that is not how the compiler deals with it. My guess is that it's mostly a syntactic distinction, and the compiler effectively does (&p)->age. Am I correct?
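
    That intuition is essentially right: both forms reduce to "some base address plus the field's offset"; the difference is only where the base address comes from (the object's own address for p.age, the value stored in the pointer for pp->age). A small illustration of the address-plus-offset idea using Python's ctypes, with a hypothetical person layout:

      import ctypes

      class Person(ctypes.Structure):
          _fields_ = [("age", ctypes.c_int), ("height", ctypes.c_double)]

      p = Person(30, 1.80)
      pp = ctypes.pointer(p)

      base = ctypes.addressof(p)                     # the role played by &p in C
      offset = Person.age.offset                     # byte offset of the "age" member
      via_offset = ctypes.c_int.from_address(base + offset).value

      print(p.age, pp.contents.age, via_offset)      # all three print 30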

    Read the article

  • Does iOS 4 Have "Real" Multitasking?

    - by pkulak
    Ever since the first beta came out I've been trying to find out whether "real" multitasking is possible. That is, can you put a program in the background and have it hang on to a network connection indefinitely? I'm thinking about IM or IRC apps, for example. I've compiled an app myself on OS 4, and without changing a thing it appeared to stay running in the background, but for all I know it was just suspended in memory. The docs say the best you can do is request up to 10 minutes, but in the developer presentation they showed off Skype sitting in the background and then notifying the user that a call was coming in. Does anyone know for sure how this is all going to work?

    Read the article

  • Reload table view in real time

    - by Lucas Vallim da Costa
    The app receives a UILocalNotification, but if the user is in the UITableViewController at the fire time, the table view (containing the scheduled notifications) does not reload. The user has to leave that view and load it again so that the cells are refreshed; since the notification has already fired, it should no longer be displayed in any cell of that table view. The problem is that if the user touches the specific table view cell that contained the notification that just fired, the app crashes, because the notification is not there anymore. I've called - (void)reloadData in every place possible, and the table still doesn't reload in real time. What would be a better solution for this? One other detail: how can I push a specific view after the notification is displayed (when the user slides the app icon while the phone is locked)? Any help will be truly appreciated, since these are the last details remaining before publishing my first app.

    Read the article

  • Real time App with Facebook

    - by Casebash
    Does Facebook provide access to any real time APIs so that you can respond to events as soon as they happen? If not, what alternatives are there and what are their limitations? For example, if I use polling instead, will they limit my api calls? And if I try using RSS feeds, about how much delay can I expect? Or maybe it would be possible to receive and process email notifications (if I could convince a user to forward mail to another email address), as they seem to be dispatched pretty promptly?
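
    If a push/subscription mechanism is available for the data you need (Facebook has offered real-time update subscriptions for parts of its API), the receiving side is essentially a small HTTP callback endpoint that the service POSTs to when something changes, and polling with its rate limits only matters as a fallback. A generic sketch of such a receiver; the port and payload shape are assumptions, not a Facebook-specific contract:

      from http.server import BaseHTTPRequestHandler, HTTPServer
      import json

      class CallbackHandler(BaseHTTPRequestHandler):
          def do_POST(self):
              length = int(self.headers.get("Content-Length", 0))
              body = self.rfile.read(length)
              try:
                  event = json.loads(body)
              except ValueError:
                  event = {"raw": body.decode(errors="replace")}
              print("update received:", event)       # react to the event here
              self.send_response(200)
              self.end_headers()

      HTTPServer(("", 8080), CallbackHandler).serve_forever()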

    Read the article

  • Application file (Real world example)

    - by aalhamad
    I'm looking for guidelines on creating an application file. For example, I have an application that stores user input from various controls (TextBox, DataGrid, ListBox, etc.) in a file. I'm looking for a WPF/C# implementation. I would like the following behaviour: if the user edits any form field (TextBox, etc.), an asterisk is displayed in the window title. When the window is closed while the asterisk is still there, a prompt "Would you like to save changes?" appears. If the file is then saved, the asterisk disappears. What do real applications use to create their application files? (Note: I'm not looking for database saving or SQL.) I'm just looking for hints and guidelines. Thank you.
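
    The asterisk behaviour is usually just a "dirty flag" tracked alongside the document data, independent of how the file itself is serialized. A sketch of that pattern with invented names; in WPF the title would typically come from a view-model property and the prompt from handling the Window.Closing event:

      class DocumentState:
          def __init__(self, base_title):
              self.base_title = base_title
              self.dirty = False

          @property
          def title(self):
              return self.base_title + (" *" if self.dirty else "")

          def mark_dirty(self):        # call from every edit/changed handler
              self.dirty = True

          def mark_saved(self):        # call after writing the form data to the file
              self.dirty = False

          def should_prompt_on_close(self):
              return self.dirty

      doc = DocumentState("My App - report.proj")
      doc.mark_dirty()
      print(doc.title)                          # "My App - report.proj *"
      if doc.should_prompt_on_close():
          print("Would you like to save changes?")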

    Read the article

  • OpenVPN on ec2 bridged mode connects but no Ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have openVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client I checked the following and everything seems to shows a successful connection between the vpn client/server and UDP traffic on 1194 [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection) [server] sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] sudo iptables -L -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] openvpn.log Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38 Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected] Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected] Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889 Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST' Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1) Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889 [client] tracert google.com Tracing route to google.com [74.125.71.104] over a maximum of 30 hops: 1 347 ms 349 ms 348 ms PC [w.X.Y.Z] 2 * * * Request timed out. 
I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (Note: W.X.Y.Z == amazon EC2 private ipaddress) bridge config on br0 ifconfig eth0 0.0.0.0 promisc up brctl addbr br0 brctl addif br0 eth0 ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up route add default gw W.X.Y.1 br0 /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html) local W.X.Y.Z dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;server W.X.Y.0 255.255.255.0 server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200 ;push "route W.X.Y.0 255.255.255.0" push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" tls-auth ta.key 0 # This file is secret user nobody group nogroup log-append openvpn.log iptables config sudo iptables -A INPUT -i tap0 -j ACCEPT sudo iptables -A INPUT -i br0 -j ACCEPT sudo iptables -A FORWARD -i br0 -j ACCEPT sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward Routing Tables added route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0 C:>route print =========================================================================== Interface List 32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9 15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2 14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Net work Adapter 10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller 1...........................Software Loopback Interface 1 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25 10.1.2.0 255.255.255.0 On-link 10.1.2.201 281 10.1.2.201 255.255.255.255 On-link 10.1.2.201 281 10.1.2.255 255.255.255.255 On-link 10.1.2.201 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.2.201 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.2.201 281 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 10.1.2.1 Default =========================================================================== C:>tracert google.com Tracing route to google.com [74.125.71.147] over a maximum of 30 hops: 1 344 ms 345 ms 343 ms PC [W.X.Y.221] 2 * * * Request timed out.

    Read the article

  • How can I prevent an app from leaving full screen mode when I move the cursor to another display?

    - by dan
    When I have dual displays set up, or when I am using Synergy to share one keyboard and mouse across two computers/screens, I can't seem to retain F11 full-screen mode for the topmost application when I mouse out of that screen. This applies both to the application itself and to any Flash video that may be playing in full-screen mode. Is there any way to retain full-screen mode while mousing out of the display?

    Read the article

  • Ubuntu won't display netbook's native resolution

    - by Daniel
    FYI: My netbook model is an HP Mini 210-1004sa, which comes with the Intel Graphics Media Accelerator 3150 and has a 10.1" Active Matrix Colour TFT display at 1024 x 600. I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is that the OS doesn't seem to recognise the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only two available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, because then I wouldn't be able to access the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity Launcher is visible enough. I was able to define a custom 1024x600 resolution using the Q&A: How set my monitor resolution? but when I set that resolution, a black band appears at the top of the screen and the desktop area is lowered, with bits of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see if the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:
    Could not apply the stored configuration for monitors
    none of the selected modes were compatible with the possible modes:
    Trying modes for CRTC 63
    CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
    Trying modes for CRTC 64
    CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)

    Read the article

  • cannot mount root filesystem on 10.04

    - by howaryoo
    I tried to apply the recommendation of the question Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0). After running the command sudo mount --bind /dev /mnt/dev I get this error message: mount: mount point /mnt/dev does not exist. fdisk -l returns /dev/sda1, /dev/sda2 and /dev/sda5; do I need to mount sda2 and sda5? Edited after @psusi's comment: /dev/sda1 is the boot file system, so it seems that I need to mount sda2 or sda5. Here is what I tried (on a virtual machine, so the sda devices are now vda):
    ubuntu@ubuntu:~$ sudo fdisk -l
    Disk /dev/vda: 19.3 GB, 19327352832 bytes
    16 heads, 63 sectors/track, 37449 cylinders
    Units = cylinders of 1008 * 512 = 516096 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0008eece
    Device Boot Start End Blocks Id System
    /dev/vda1 * 3 496 248832 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/vda2 498 37448 18622465 5 Extended
    Partition 2 does not end on cylinder boundary.
    /dev/vda5 498 37448 18622464 8e Linux LVM
    ubuntu@ubuntu:~$ sudo mount -t ext4 /dev/vda5 /mnt
    mount: wrong fs type, bad option, bad superblock on /dev/vda5, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    ubuntu@ubuntu:~$ sudo mount -t ext2 /dev/vda5 /mnt
    mount: wrong fs type, bad option, bad superblock on /dev/vda5, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    Any info that can help me rescue that server would be greatly appreciated!

    Read the article

  • Change administrator username

    - by Fazlan
    I accidentally typed my name wrong when I created the administrator account. Although I managed to change the user name at the login screen, I am unable to rename /home/oldusername to /home/newusername. I tried most of the online tutorials, but they failed. The command I tried was: usermod -l newusername -m -d /home/newusername oldusername but the output is: cannot lock /etc/passwd; try again later. How can I fix this issue and rename the folder to newusername, with all applications working as before?

    Read the article

  • How to fix "The system is running in low-graphics mode" error?

    - by jokerdino
    Note: This is an attempt to create a canonical question that covers all instances of the "low-graphics mode" error a user may encounter, including but not limited to installation of the wrong drivers, incorrect or invalid lightdm greeters, low disk space, incorrect installation of graphics drivers for cards like ATI and Nvidia, and incorrect configuration of the xorg.conf file while setting up multiple monitors, among others. If you are experiencing the "low-graphics mode" error when trying to log in but none of the following answers work for you, please do ask a new question and then update the answers of this canonical question as and when your new question gets answered. When I try to boot into my computer, I am getting this error:
    The system is running in low-graphics mode
    Your screen, graphics cards, and input device settings could not be detected correctly. You will need to configure these yourself.
    How do I fix the failsafe X mode and log in to my computer? Answer index: The greeter is invalid

    Read the article

  • No internet - please help?

    - by All
    I just got Ubuntu and I am really, really a Linux novice. I can't get the wireless to work. I played around with it until I saw it says "wireless firmware missing" - Hardware address 00:14:A5:6A:17:C2. On my laptop there is a button that looks like a little antenna, which indicates the wireless is on. The funny thing is that this button does not seem to do anything right now - the light is not lit and does not light up. Any thoughts you have on getting this going are appreciated!

    Read the article

  • How do I turn off nVidia high performance mode?

    - by gpen06
    I am new to Ubuntu 11.10 and had never touched Linux until today. I installed Ubuntu alongside Windows 7. I have an issue with my laptop overheating and freezing when it uses the high-performance GPU graphics mode (hybrid card). It is an easy fix in Windows 7: I simply set the graphics mode manually to the low-performance integrated graphics (power-saving) mode using the advanced power settings. In Ubuntu, I'm stumped as to how to do this. When using Ubuntu, my laptop shows the same symptoms as it did in Windows without the fix. I have the nVidia graphics driver installed for Windows, which allows me to see which mode I am in; I downloaded it from my laptop manufacturer's website (ASUS). They do not offer driver downloads for Linux. Screenshot of power settings in Windows 7

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the dimensionality of data with neural networks of autoencoding the olivetti face dataset with an adapted version of the MNIST digits matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum the stacked RBMs are entering the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using a RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here. rbmvislinear.m: epsilonw = 0.001; % Learning rate for weights epsilonvb = 0.001; % Learning rate for biases of visible units epsilonhb = 0.001; % Learning rate for biases of hidden units weightcost = 0.0002; initialmomentum = 0.5; finalmomentum = 0.9; [numcases numdims numbatches]=size(batchdata); if restart ==1, restart=0; epoch=1; % Initializing symmetric weights and biases. vishid = 0.1*randn(numdims, numhid); hidbiases = zeros(1,numhid); visbiases = zeros(1,numdims); poshidprobs = zeros(numcases,numhid); neghidprobs = zeros(numcases,numhid); posprods = zeros(numdims,numhid); negprods = zeros(numdims,numhid); vishidinc = zeros(numdims,numhid); hidbiasinc = zeros(1,numhid); visbiasinc = zeros(1,numdims); sigmainc = zeros(1,numhid); batchposhidprobs=zeros(numcases,numhid,numbatches); end for epoch = epoch:maxepoch, fprintf(1,'epoch %d\r',epoch); errsum=0; for batch = 1:numbatches, if (mod(batch,100)==0) fprintf(1,' %d ',batch); end %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% data = batchdata(:,:,batch); poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1))); batchposhidprobs(:,:,batch)=poshidprobs; posprods = data' * poshidprobs; poshidact = sum(poshidprobs); posvisact = sum(data); %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% poshidstates = poshidprobs > rand(numcases,numhid); %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);% + randn(numcases,numdims) if not using mean neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1))); negprods = negdata'*neghidprobs; neghidact = sum(neghidprobs); negvisact = sum(negdata); %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% err= sum(sum( (data-negdata).^2 )); errsum = err + errsum; if epoch>5, momentum=finalmomentum; else momentum=initialmomentum; end; %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% vishidinc = momentum*vishidinc + ... 
epsilonw*( (posprods-negprods)/numcases - weightcost*vishid); visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact); hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact); vishid = vishid + vishidinc; visbiases = visbiases + visbiasinc; hidbiases = hidbiases + hidbiasinc; %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% end fprintf(1, '\nepoch %4i error %f \n', epoch, errsum); end dofacedeepauto.m: clear all close all maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine. numhid=2000; numpen=1000; numpen2=500; numopen=30; fprintf(1,'Pretraining a deep autoencoder. \n'); fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch); load fdata %makeFaceData; [numcases numdims numbatches]=size(batchdata); fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid); restart=1; rbmvislinear; hidrecbiases=hidbiases; save mnistvh vishid hidrecbiases visbiases; maxepoch=50; fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen); batchdata=batchposhidprobs; numhid=numpen; restart=1; rbm; hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases; save mnisthp hidpen penrecbiases hidgenbiases; fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2); batchdata=batchposhidprobs; numhid=numpen2; restart=1; rbm; hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases; save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2; fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen); batchdata=batchposhidprobs; numhid=numopen; restart=1; rbmhidlinear; hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases; save mnistpo hidtop toprecbiases topgenbiases; backpropface; Thanks for your time
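
    For reference, here is the CD-1 update from rbmvislinear.m above transcribed into numpy: a sketch assuming standardized, unit-variance Gaussian visible units, which is the setting in which taking negdata as the mean without added noise is reasonable; momentum is omitted for brevity. Comparing its reconstruction error against the MATLAB run on the same batches can help isolate whether the problem is in the update itself or elsewhere (e.g. data scaling):

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def cd1_step(data, W, vbias, hbias, lr=0.001, weightcost=0.0002):
          n = data.shape[0]
          # positive phase: hidden probabilities and sampled binary states
          pos_hid = sigmoid(data @ W + hbias)
          hid_states = (pos_hid > rng.random(pos_hid.shape)).astype(float)
          # negative phase: linear (Gaussian-mean) reconstruction of the visibles
          negdata = hid_states @ W.T + vbias          # add rng.normal noise if not using the mean
          neg_hid = sigmoid(negdata @ W + hbias)
          # gradient estimates, mirroring the MATLAB updates (momentum omitted)
          dW = (data.T @ pos_hid - negdata.T @ neg_hid) / n - weightcost * W
          dv = (data - negdata).mean(axis=0)
          dh = (pos_hid - neg_hid).mean(axis=0)
          err = np.sum((data - negdata) ** 2)
          return W + lr * dW, vbias + lr * dv, hbias + lr * dh, err

      # tiny smoke test with random "data"
      numdims, numhid = 64, 16
      W = 0.1 * rng.standard_normal((numdims, numhid))
      vbias, hbias = np.zeros(numdims), np.zeros(numhid)
      batch = rng.standard_normal((10, numdims))
      W, vbias, hbias, err = cd1_step(batch, W, vbias, hbias)
      print("reconstruction error:", err)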

    Read the article

  • Measuring "real" phone signal strength on a mobile phone

    - by Serafeim
    I want to programmatically measure the phone signal strength on a mobile phone. I don't actually care about the particular phone or the programming environment: it can be based on Android, Windows Mobile or even J2ME, and can be from any manufacturer (please, no iPhone). However, it needs to be a real, commercial mobile phone and not a special measurement device. This problem is not as easy as it seems at first look. I am aware that there already exist a number of methods that claim to return the phone signal strength. Some of these are: SystemState.PhoneSignalStrength for WM6; RIL_GetCellTowerInfo for WinCE (the dwRxLevel member of the returned RILCELLTOWERINFO struct); android.telephony.NeighboringCellInfo.getRssi() for Android. The problem with the above is that they only return a few (under 10) discrete values, meaning that the return values of SystemState.PhoneSignalStrength can only be translated to (for instance) -100 dBm, -90 dBm, -80 dBm, -70 dBm and -60 dBm, which is not useful for my application, since I'd like to have as much precision as possible. It doesn't matter if there is an undocumented solution that works on only one phone; if you can tell me a way, I'd be grateful. Thanks in advance

    Read the article

  • How can I handle a form submit using REAL OOP in PHP

    - by Lorence
    I'm used to Java and creating UML, and I was wondering how PHP can be OOP: objects live only until the request ends, then they are destroyed. So if I'm using a database, it seems useless to create a class and add members (variables) to it; they will be lost. I can't pass the main system object from one page to another, or anything similar, so how can PHP be compared to Java? You never do OOP, I mean REAL OOP, not just creating classes; in fact your index will be a procedural file with some object instances, and then what? What about if I make an HTML form and want to submit the data? I have to call a file which is not a class but a procedural PHP file where I grab the submitted data with POST; from that file you instantiate a class and do some logic there, but to me that's not pure OOP. Can somebody point me to the right way of doing OOP using a form-submit example? Thanks!
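
    For the form-submit part, the usual shape is a single entry point (front controller) that routes the request to a handler object, which in turn builds and uses domain objects; the objects really are recreated on every request, and anything that must outlive the request is persisted. A sketch of that shape, written in Python only to match the other sketches on this page (in PHP it would be one index.php dispatching to handler classes), with all names invented:

      class Registration:                      # domain object: validated state plus behaviour
          def __init__(self, name, email):
              self.name, self.email = name, email

          def welcome_message(self):
              return "Welcome, %s!" % self.name

      class InMemoryRepository:                # stand-in for real persistence
          def __init__(self):
              self.items = []
          def save(self, obj):
              self.items.append(obj)

      class RegisterHandler:                   # one handler class per form/action
          def __init__(self, repository):
              self.repository = repository     # persistence injected, not hard-coded

          def handle(self, post_data):
              reg = Registration(post_data["name"].strip(), post_data["email"].strip())
              self.repository.save(reg)        # objects don't survive the request; the store does
              return reg.welcome_message()

      # the "front controller": map the submitted action to a handler and dispatch
      handlers = {"register": RegisterHandler(InMemoryRepository())}
      post = {"action": "register", "name": "Lorence", "email": "l@example.com"}
      print(handlers[post["action"]].handle(post))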

    Read the article

  • Which knowledge base/rule-based inference engine to choose for real time Runway incursion prevention

    - by Piligrim
    Hello, we are designing a project that would listen to the dialogue between airport controllers and pilots to prevent runway incursions (e.g. one airplane taking off while another is crossing the runway). Our professor wants us to use Jena for the knowledge base (or anything else, but it should be some sort of rule-based engine). Inference is not the main thing in Jena, and there is not much documentation or many examples of this. So we need an engine that takes messages from pilots as input and outputs possible risks of incursion or any other error in the message protocol. It should be easy to write rules, and easy to feed the engine real-time data. I imagine it something like this: a pilot sends a message that he is landing on some runway, and the system remembers that the runway is busy and no one should cross it. If someone is given an instruction to cross this runway, the engine should fire a rule that something is wrong. When the pilot sends a message that he has left the runway and is going to the gate, the system clears the runway and lets other planes use it. So is Jena, or Prolog, or any other rules engine suitable for this? I mean, it is suitable, but do we really need to use it? I asked the professor if we could just keep the state of the runway and use some simple checks based on the messages we receive, and he said that it is not scalable and we need the knowledge base. Can someone give me any advice on which approach to use for this system? If you recommend a knowledge base, which one should we use? The project is written in Java. Thank you.
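
    For scale, here is the "just keep state and run simple checks" alternative as a minimal sketch (the message format is invented). Whether a full rule engine such as Jena or Drools pays off mostly depends on how many rules like these you expect to accumulate and how often non-programmers need to change them:

      OCCUPIED, CLEAR = "occupied", "clear"
      runways = {}                                   # runway id -> current state

      def process(msg):
          kind, rwy = msg["type"], msg["runway"]
          state = runways.get(rwy, CLEAR)
          if kind in ("LANDING_CLEARANCE", "TAKEOFF_CLEARANCE"):
              if state == OCCUPIED:
                  return "ALERT: runway %s already in use" % rwy
              runways[rwy] = OCCUPIED
          elif kind == "CROSSING_CLEARANCE":
              if state == OCCUPIED:
                  return "ALERT: crossing clearance on active runway %s" % rwy
              runways[rwy] = OCCUPIED
          elif kind == "RUNWAY_VACATED":
              runways[rwy] = CLEAR
          return "ok"

      print(process({"type": "LANDING_CLEARANCE", "runway": "27L"}))   # ok
      print(process({"type": "CROSSING_CLEARANCE", "runway": "27L"}))  # ALERT
      print(process({"type": "RUNWAY_VACATED", "runway": "27L"}))      # ok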

    Read the article

  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand anemic domain models and why they are supposedly an anti-pattern. Here is a real-world example. I have an Employee class, which has a ton of properties - name, gender, username, etc.:
    public class Employee
    {
        public string Name { get; set; }
        public string Gender { get; set; }
        public string Username { get; set; }
        // Etc.. mostly getters and setters
    }
    Next we have a system that involves rotating incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex, as it involves round-robining enquiries, checking for holidays, employee preferences etc. So this system is currently separated out into a service: EmployeeLeadRotationService.
    public class EmployeeLeadRotationService : IEmployeeLeadRotationService
    {
        private IEmployeeRepository _employeeRepository;
        // ...plus lots of other injected repositories and services

        public Employee SelectEmployee(ILead lead)
        {
            // Etc. lots of complex logic
        }
    }
    Then on the back end of our website enquiry form we have code like this:
    public void SubmitForm()
    {
        var lead = CreateLeadFromFormInput();
        var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>().SelectEmployee(lead);
        Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks.");
    }
    I don't really encounter many problems with this approach, but supposedly this is something that I should run screaming from because it is an anemic domain model. But to me it's not clear where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? What about all the injected repositories etc. that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of these repositories?
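
    One refactoring that is often suggested for this situation (sketched below in Python, with all names invented) is to keep the rotation rules in a plain domain object that is handed the employees and the date it needs, and to let a thin application service do the repository lookups at the boundary. Employee stays simple, the business rule stays testable, and no repositories need to be injected into domain objects:

      from datetime import date

      class Employee:
          def __init__(self, name, holidays=()):
              self.name = name
              self.holidays = set(holidays)

          def is_available(self, day):
              return day not in self.holidays

      class LeadRotationPolicy:                      # pure domain logic, no I/O
          def __init__(self):
              self.next_index = 0

          def select(self, employees, day):
              for _ in range(len(employees)):
                  candidate = employees[self.next_index % len(employees)]
                  self.next_index += 1
                  if candidate.is_available(day):
                      return candidate
              return None

      # an application service (the only place that talks to repositories) would load the
      # employees from storage and then call the policy:
      employees = [Employee("Ann", holidays={date(2024, 1, 1)}), Employee("Bob")]
      policy = LeadRotationPolicy()
      print(policy.select(employees, date(2024, 1, 1)).name)   # "Bob"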

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >