Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally posted this question over at SuperUser originally, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here is more apt to figure it out. Apologies for the cross-post.

    Some background: My home network consists of my Desktop, a two-month-old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:

    - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to show 70% signal strength, as observed by both themselves and the router.
    - Once dropped, a wireless connection will have difficulty reconnecting. And if a connection manages to become established, it will quickly drop off again.
    - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
    - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:

    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBIOS Helper and Computer Browser services on the Windows 7 machine.
    - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the operating system itself) is doing is giving the router fits.

    However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets, or using the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable, by my instincts, to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst-case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.
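
    If the Vista/7 TCP autotuning and window-scaling behaviour mentioned above is the suspect, it can be switched off temporarily from an elevated command prompt. This is only a diagnostic sketch, not a confirmed fix for this particular router:

        rem Show the current global TCP settings first
        netsh interface tcp show global
        rem Disable receive-window autotuning (the window-scaling suspect)
        netsh interface tcp set global autotuninglevel=disabled

    If it makes no difference, restore the default with autotuninglevel=normal.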

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops using Ubuntu 10.04 from getting the users from an old OpenLDAP implementation to a newer CentOS Active Directory. I haven't had any problems so far, until I reached a Debian Lenny server. I've set up the server as the others, setting /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf was not an accepted file by pam_ldap -- it worked with Ubuntu, though -- so I renamed it to /etc/pam_ldap.conf. Same result. However, once I've changed the name of this file, when I log in using SSH I get this on the LDAP server logs:

        [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0

    The password isn't working. I don't know what could be wrong; everything else seems to be OK. That user/password combination is working from other clients:

        [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es"

    I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? On pam_ldap.conf, I've set up this, as in all the other servers:

        # Do not hash the password at all; presume
        # the directory server will do it, if
        # necessary. This is the default.
        pam_password md5

    Also tried clear, but it didn't work. Anyway, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the package libpam-dotfile to test login, it works:

        # pamtest ssh jorge.suarez
        Trying to authenticate <jorge.suarez> for service <ssh>.
        Password:
        Authentication successful.
        # pamtest foo jorge.suarez
        Trying to authenticate <jorge.suarez> for service <foo>.
        Password:
        Authentication successful.

    But "su" doesn't work either ("Id. descoñecido" is "unknown ID"; the system locale is Galician):

        # su jorge.suarez
        Id. descoñecido: jorge.suarez

    Just the output from getent passwd:

        # getent passwd
        root:x:0:0:root:/root:/bin/bash
        daemon:x:1:1:daemon:/usr/sbin:/bin/sh
        bin:x:2:2:bin:/bin:/bin/sh
        sys:x:3:3:sys:/dev:/bin/sh
        sync:x:4:65534:sync:/bin:/bin/sync
        games:x:5:60:games:/usr/games:/bin/sh
        man:x:6:12:man:/var/cache/man:/bin/sh
        lp:x:7:7:lp:/var/spool/lpd:/bin/sh
        mail:x:8:8:mail:/var/mail:/bin/sh
        news:x:9:9:news:/var/spool/news:/bin/sh
        uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
        proxy:x:13:13:proxy:/bin:/bin/sh
        www-data:x:33:33:www-data:/var/www:/bin/sh
        backup:x:34:34:backup:/var/backups:/bin/sh
        list:x:38:38:Mailing List Manager:/var/list:/bin/sh
        irc:x:39:39:ircd:/var/run/ircd:/bin/sh
        gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
        nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
        libuuid:x:100:101::/var/lib/libuuid:/bin/sh
        Debian-exim:x:101:103::/var/spool/exim4:/bin/false
        statd:x:102:65534::/var/lib/nfs:/bin/false
        sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
        luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash
        messagebus:x:105:107::/var/run/dbus:/bin/false
        sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash
        ntp:x:107:110::/home/ntp:/bin/false
        haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false
        vde2-net:x:109:114::/var/run/vde2:/bin/false
        uml-net:x:110:115::/home/uml-net:/bin/false
        polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false
        Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false

    Nscd was stopped from the beginning.
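
    For what it's worth, one way to take PAM out of the picture and reproduce the failing BIND directly is the OpenLDAP client tools (package ldap-utils on Debian). This is only a diagnostic sketch; the server address and DNs are copied from the logs above:

        # Search for the entry first (mirrors op=1 in the logs)
        ldapsearch -x -H ldap://10.1.176.237 \
          -b "ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" "(uid=jorge.suarez)" dn

        # Then try the simple bind that returns err=49 when done through pam_ldap
        ldapsearch -x -H ldap://10.1.176.237 \
          -D "uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" -W \
          -b "ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" "(uid=jorge.suarez)" dn

    If this bind succeeds from the Lenny box with the same password, the directory and the SSHA hashes are fine and the problem is on the pam_ldap side (for instance, the password being rewritten before the bind).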

  • Postmortem debugging with WinDBG.

    - by Drazar
    I have a WCF service running on a server, and occasionally (1-2 times every month) it throws a COMException with the informative message "Unknown error (0x80005008)". When I googled for this particular error I only got threads about problems when creating virtual directories in IIS, and the source code has nothing to do with creating a virtual directory in IIS.

        DirectoryServiceLib.LdapProvider.Directory - CreatePost - Could not create employee for 195001010000,000000000000:
        System.Runtime.InteropServices.COMException (0x80005008): Unknown error (0x80005008)
           at System.DirectoryServices.PropertyValueCollection.PopulateList

    I've taken a memory dump when I catch the exception, for further analysis in WinDBG. After switching to the right thread I executed the !CLRStack command:

        000000001b8ab6d8 000000007708671a [NDirectMethodFrameStandalone: 000000001b8ab6d8] Common.MemoryDump.MiniDumpWriteDump(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab680 000007ff002808d8 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab780 000007ff00280812 Common.MemoryDump.CreateMiniDump(System.String)
        000000001b8ab7e0 000007ff0027b218 DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8ad6d8 000007fef8816869 [HelperMethodFrame: 000000001b8ad6d8]
        000000001b8ad820 000007feec2b6c6f System.DirectoryServices.PropertyValueCollection.PopulateList()
        000000001b8ad860 000007feec225f0f System.DirectoryServices.PropertyValueCollection..ctor(System.DirectoryServices.DirectoryEntry, System.String)
        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
        000000001b8ad8f0 000007ff00274d34 Common.DirectoryEntryExtension.GetStringAttribute(System.String)
        000000001b8ad940 000007ff0027f507 DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)
        000000001b8ad980 000007ff0027a7cf DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adbe0 000007ff00279532 DirectoryServiceLib.WCFDirectory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adc60 000007ff001f47bd DynamicClass.SyncInvokeCreatePost(System.Object, System.Object[], System.Object[])

    My conclusion is that it fails when the code is calling System.DirectoryServices.PropertyCollection.get_Item(System.String). So after issuing !CLRStack -a I get this result:

        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
            PARAMETERS:
                this = <no data>
                propertyName = <no data>
            LOCALS:
                <CLR reg> = 0x0000000001dcef78
                <no data>

    My very first question is: why does it display no data for propertyName? I am kinda new to WinDBG.

    However, I executed a dumpobject on 0x0000000001dcef78:

        0:013> !do 0x0000000001dcef78
        Name: System.String
        MethodTable: 000007fef66d6960
        EEClass: 000007fef625eec8
        Size: 74(0x4a) bytes
        File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
        String: personalprescriptioncode
        Fields:
        MT                Field    Offset  Type           VT  Attr      Value  Name
        000007fef66dc848  40000ed       8  System.Int32    1  instance     24  m_stringLength
        000007fef66db388  40000ee       c  System.Char     1  instance     70  m_firstChar
        000007fef66d6960  40000ef      10  System.String   0  shared   static  Empty
        >> Domain:Value 0000000000174e10:00000000019d1420 000000001a886f50:00000000019d1420 <<

    So when the source code wants to fetch personalprescriptioncode from Active Directory (which is used as the persistence layer), it fails. Looking back at the stack, it is when issuing the Copy method:

        DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)

    So looking in the source code:

        DirectoryPost postInLimbo = DirectoryPostFactory.Instance().GetDirectoryPost(
            LdapConfigReader.Instance().GetConfigValue("LimboDN"), idGenPerson.ID.UserId);
        if (postInLimbo != null)
            newPost.Copy(postInLimbo);

    This code is looking for another post in OU=limbo with the same UserId, and if it finds one it copies the attributes to the new post. In this case it does find one, and it fails on personalprescriptioncode. I've looked in Active Directory under OU=Limbo and the post exists there with the attribute personalprescriptioncode=31243.

    Question 1: Why does it display no data for some of the PARAMETERS and LOCALS? Is it the GC that has cleaned up before the memory dump was created?

    Question 2: Is there anything more I can do to get to the solution to this problem?
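
    A few more SOS commands that sometimes recover extra context from a dump like this -- a sketch, assuming a .NET 4 dump opened with a matching-bitness debugger:

        $$ Load SOS matching the CLR in the dump (.NET 4)
        .loadby sos clr
        $$ Dump all managed objects referenced from this thread's stack
        !dso
        $$ See what is keeping the string alive
        !gcroot 0x0000000001dcef78
        $$ Print the managed exception on this thread, including inner exceptions
        !pe -nested

    Regarding Question 1: <no data> for parameters and locals is usually a side effect of optimized (retail) code keeping values in registers that SOS cannot recover from the dump, rather than the GC having collected them.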

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive, and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

        EnterCriticalSection(&m_Crit4);
        m_bSomeVariable = true;
        LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary there's no need for synchronising this operation with a critical section. I did some more research online to see whether this operation needed synchronisation, and I came up with two scenarios where it did:

    1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
    2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to such a variable with memory barriers, ensuring that the variable is always completely written/read to main system memory before it is used. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    So...

    QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte variable on x86/x64 between read/write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned?

    I did some more digging into our code and the variables defined in the class:

        class CExample
        {
        private:
            CRITICAL_SECTION m_Crit1; // Protects variable a
            CRITICAL_SECTION m_Crit2; // Protects variable b
            CRITICAL_SECTION m_Crit3; // Protects variable c
            CRITICAL_SECTION m_Crit4; // Protects variable d
            // ...
        };

    Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect; if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; named mutexes are shared across processes, or only processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes?
    I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)
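
    For a single flag like m_bSomeVariable, the Win32 Interlocked family is the usual lightweight alternative to a critical section. A minimal sketch of the idiom (not the code under discussion, just an illustration):

        #include <windows.h>

        // LONG is the natural operand for the Interlocked APIs, and the
        // compiler guarantees suitable alignment for it.
        volatile LONG g_bSomeVariable = FALSE;

        void SetFlag()
        {
            // Atomic write with full memory-barrier semantics.
            InterlockedExchange(&g_bSomeVariable, TRUE);
        }

        BOOL ReadFlag()
        {
            // Compare-exchange with 0/0 is a no-op write that returns the
            // current value with barrier semantics; a plain volatile read
            // also suffices on x86 with VS2005 and later.
            return InterlockedCompareExchange(&g_bSomeVariable, 0, 0);
        }

    This touches QUESTION 1 directly: the VS2005+ volatile-with-barriers behaviour is a Microsoft-specific extension, whereas the Interlocked functions are the documented Win32 contract for atomic, barriered access.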

    Read the article

  • Which option do you like the most?

    - by spderosso
    I need to show a list of movies ordered by the date they were registered in the system and the number of comments users have made about them. E.g.:

        Title     | Register Date | # of comments
        Gladiator | 02-01-2010    | 30
        Matrix    | 01-02-2010    | 20

    It is a web application that follows MVC. So, I have the object "Movie", there is a MovieManager interface that is used for retrieving/persisting data to the database, and a servlet RecentlyAddedMovies.java that forwards to a .jsp to render the list. I will explain my problem by showing an example of how I am doing another similar task: showing top-ranked movies.

    Regarding the MovieManager:

        /**
         * Returns a collection of the x best-ranked movies
         * @param x the amount of movies to return
         * @return A collection of movies
         * @throws SQLException
         */
        public Collection<Movie> getTopX(int x) throws SQLException;

    The servlet:

        public class Top5 extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                MovieManager movManager = JDBCMovieManager.getInstance();
                try {
                    req.setAttribute("movies", movManager.getTopX(5));
                } catch (SQLException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                req.getRequestDispatcher("listMovies.jsp").forward(req, resp);
            }
        }

    And the listMovies.jsp has in some place something like:

        <c:forEach var="movie" items="${movies}">
            ...
            <c:out value="${movie.title}"/>
            ...
        </c:forEach>

    The difference between the top5 problem and the problem I am trying to solve is that with the top5 thing, all the data I needed to show in the view was part of the "Movie" object, so the MovieManager simply returned a Collection of Movies. Now, the MovieManager must retrieve an ordered list of (Movie, # of comments). The Movie object has a collection of Comment associated, but of course it lacks an instance variable with the length of that collection. The following things came into my mind:

    a) Create a MovieView object or something like that, that is only used by the "View" and "Controller" layers of the MVC and is not part of the "Model", in the sense that it is used only for easy retrieval and display of information. This MovieView object will have a title, registerDate and # of comments, so the thing would look something like this:

        /**
         * Returns a collection of the x most recently added movies
         * @param x the amount of movies to return
         * @return A collection of movies
         * @throws SQLException
         */
        public Collection<MovieView> getXRecentlyAddedMovies(int x) throws SQLException;

    The .jsp:

        <c:forEach var="movieView" items="${movieViews}">
            ...
            <c:out value="${movieView.title}"/>
            <c:out value="${movieView.registerDate}"/>
            <c:out value="${movieView.noOfComments}"/>
            ...
        </c:forEach>

    b) Add empty Comments to the Collection the movie has (there is no actual need to retrieve all the comment data from the database), so that the comment collection's size() will show the number of comments that movie has. (This sounds horrible to me.)

    c) Add to the "Movie" object an instance field noOfComments, with the setter and getter, that will help with this type of case, even if, looking only from the object point of view, it is a redundant field since the object already has a Collection that is ".size()"-able.

    d) Make the MovieManager retrieve some structure like a Map, or a Collection of a two-component array, or something like that, that will contain both a Movie object and the associated number of comments, and with JSTL iterate over it and show it accordingly.
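
    Whichever shape the result object takes, the count itself is cheapest to compute in the database rather than by loading Comment objects. A sketch of the query that would back option a) -- the table and column names are hypothetical, since the schema isn't shown:

        SELECT m.id, m.title, m.register_date,
               COUNT(c.id) AS no_of_comments
        FROM movie m
        LEFT JOIN comment c ON c.movie_id = m.id
        GROUP BY m.id, m.title, m.register_date
        ORDER BY m.register_date DESC
        LIMIT ?

    The JDBCMovieManager implementation would then map each row straight into a MovieView, so neither the Movie entity nor its Comment collection is ever materialised.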

  • New project created with Flex Mojo's archetype throws Cannot Find Parent Project-Maven Exception

    - by ignorant
    This is probably a silly question but I just can't seem to figure it out. I'm completely new to Flex and Maven.

    Maven 2.2.1: unzipped, M2_HOME set, and the repository altered to point to a different drive location in settings.xml.
    Flex 4.0: installed.

    Created a multi-modular webapp project using flexmojos:

        mvn archetype:generate
          -DarchetypeRepository=http://repository.sonatype.org/content/groups/flexgroup
          -DarchetypeGroupId=org.sonatype.flexmojos
          -DarchetypeArtifactId=flexmojos-archetypes-modular-webapp
          -DarchetypeVersion=RELEASE

    with the following options:

        groupId=com.test
        artifactId=test
        version=1.0-snapshot
        package=com.tests

    This creates:

        test
        |-- pom.xml
        |-- swc
        |   `-- pom.xml
        |-- swf
        |   `-- pom.xml
        `-- war
            `-- pom.xml

    The parent pom has swc, swf, war as modules. The dependency chain is war -> swf -> swc, with the parent artifactId of swc, swf and war set to swf, swc and test respectively. On executing mvn on the test folder (for that matter, clean or anything) I get this error:

        G:\Projects\test>mvn -e
        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        Downloading: http://repo1.maven.org/maven2/com/test/swc/1.0-snapshot/swc-1.0-snapshot.pom
        [INFO] Unable to find resource 'com.test:swc:pom:1.0-snapshot' in repository central (http://repo1.maven.org/maven2)
        [INFO] ------------------------------------------------------------------------
        [ERROR] FATAL ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to resolve artifact.
        GroupId: com.test
        ArtifactId: swc
        Version: 1.0-snapshot
        Reason: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.reactor.MavenExecutionException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot
            at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:508)
            at org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200)
            at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:604)
            at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:487)
            at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:560)
            at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:391)
            ... 12 more
        Caused by: org.apache.maven.project.ProjectBuildingException: POM 'com.test:swc' not found in repository: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2) for project com.test:swc
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:605)
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
            ... 19 more
        Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Unable to download the artifact from any repository
          com.test:swc:pom:1.0-snapshot
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 20 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 22 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Tue Jun 15 19:22:15 GMT+02:00 2010
        [INFO] Final Memory: 1M/2M
        [INFO] ------------------------------------------------------------------------

    Looks like it's trying to download the project from Maven's central repository instead of building it. What am I missing?
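
    That symptom usually means a child pom's <parent> coordinates don't match the aggregator pom, so Maven gives up on the reactor and tries to resolve the parent from central. Assuming the modules are meant to inherit from com.test:test, a corrected header for swc/pom.xml would look something like this sketch (artifactId and packaging kept as the archetype generated them, per the error message):

        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <groupId>com.test</groupId>
            <artifactId>test</artifactId>
            <version>1.0-snapshot</version>
            <!-- read the parent from disk instead of a repository -->
            <relativePath>../pom.xml</relativePath>
          </parent>
          <artifactId>swc-swc</artifactId>
          <packaging>swc</packaging>
        </project>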

  • Error Handling without Exceptions

    - by James
    While searching SO for approaches to error handling related to business rule validation, all I encounter are examples of structured exception handling. MSDN and many other reputable development resources are very clear that exceptions are not to be used to handle routine error cases. They are only to be used for exceptional circumstances and unexpected errors that may occur from improper use by the programmer (but not the user). In many cases, user errors such as fields that are left blank are common, and things which our program should expect, and therefore are not exceptional and not candidates for the use of exceptions.

    QUOTE: Remember that the use of the term exception in programming has to do with the thinking that an exception should represent an exceptional condition. Exceptional conditions, by their very nature, do not normally occur; so your code should not throw exceptions as part of its everyday operations. Do not throw exceptions to signal commonly occurring events. Consider using alternate methods to communicate to a caller the occurrence of those events and leave the exception throwing for when something truly out of the ordinary happens.

    For example, proper use:

        private void DoSomething(string requiredParameter)
        {
            if (requiredParameter == null)
                throw new ArgumentException("requiredParameter cannot be null");
            // Remainder of method body...
        }

    Improper use:

        // Renames item to a name supplied by the user. Name must begin with an "F".
        public void RenameItem(string newName)
        {
            // Items must have names that begin with "F"
            if (!newName.StartsWith("F"))
                throw new RenameException("New name must begin with \"F\"");
            // Remainder of method body...
        }

    In the above case, according to best practices, it would have been better to pass the error up to the UI without involving/requiring .NET's exception handling mechanisms.

    Using the same example above, suppose one needed to enforce a set of naming rules against items. What approach would be best?

    - Having the method return an enumerated result? RenameResult.Success, RenameResult.TooShort, RenameResult.TooLong, RenameResult.InvalidCharacters, etc.
    - Using an event in a controller class to report to the UI class? The UI calls the controller's RenameItem method, and then handles an AfterRename event that the controller raises and that has the rename status as part of the event args.
    - The controlling class directly references and calls a method from the UI class that handles the error, e.g. ReportError(string text).
    - Something else?

    Essentially, I want to know how to perform complex validation in classes that may not be the Form class itself, and pass the errors back to the Form class for display -- but I do not want to involve exception handling where it should not be used (even though it seems much easier!).

    Based on responses to the question, I feel that I'll have to state the problem in terms that are more concrete (UI = user interface, BLL = business logic layer; in this case, just a different class):

    1. User enters value within UI.
    2. UI reports value to BLL.
    3. BLL performs routine validation of the value.
    4. BLL discovers rule violation.
    5. BLL returns rule violation to UI.
    6. UI receives return from BLL and reports error to user.

    Since it is routine for a user to enter invalid values, exceptions should not be used. What is the right way to do this without exceptions?
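
    One common non-exception shape for steps 3-5 is a small result object rather than a bare enum, so the BLL can report several violations at once. A minimal C# sketch -- the type and member names are mine, not from any framework:

        using System.Collections.Generic;

        public class ValidationResult
        {
            private readonly List<string> _errors = new List<string>();

            public bool IsValid { get { return _errors.Count == 0; } }
            public IList<string> Errors { get { return _errors.AsReadOnly(); } }

            public void AddError(string message) { _errors.Add(message); }
        }

        public class ItemRules
        {
            // BLL: routine validation of user input, no exceptions thrown.
            public ValidationResult ValidateName(string newName)
            {
                ValidationResult result = new ValidationResult();
                if (string.IsNullOrEmpty(newName))
                    result.AddError("Name is required.");
                else if (!newName.StartsWith("F"))
                    result.AddError("New name must begin with \"F\".");
                return result;
            }
        }

    The UI checks result.IsValid and renders result.Errors however it likes; exceptions stay reserved for programmer errors such as a null rules object.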

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display/edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables/sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that.

    Is there anything we can do to improve this? We can CompiledQuery.Compile this -- that doesn't do any work until first use and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and/or to serialise out the EF compiled query cache (short of reflection tricks); the CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I can kick off something in the background from app_startup to execute all queries to get them compiled -- is that safe?)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile instead of 12-15, but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas?

    - Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them.
    - The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help.
    - I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway.
    - I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems.
    - Is there something O(n^2) in the execution plan compiler? Is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help?
    - Setting EF to lazy-load didn't help.
    - I've bought the pre-release O'Reilly Julie Lerman EF4 book, but I can't find anything in there to help beyond 'compile your queries'.

    I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement! Thanks for any suggestions!
    We're running against SQL Server 2008 in case that matters, and XP / 7 / Server 2008 R2 using RTM VS2010.
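
    A sketch of the app_startup warm-up idea mentioned above -- precompile the big fetch once on a background thread so the first real caller doesn't pay the 12-15 seconds. The context and entity names (MyEntities, BigEntity) are placeholders, and whether warming the cache concurrently with early requests is entirely safe is exactly the open question:

        using System;
        using System.Data.Objects;
        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // Compiled once per AppDomain; EF caches the plan behind this delegate.
            public static readonly Func<MyEntities, int, BigEntity> FetchBigEntity =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.BigEntities
                       .Include("Child1").Include("Child2") // ...and the other Includes
                       .FirstOrDefault(e => e.Id == id));

            // Call from Application_Start to pay the compile cost off-line.
            public static void WarmUpInBackground()
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    using (MyEntities ctx = new MyEntities())
                    {
                        // An id that matches nothing still forces full compilation.
                        FetchBigEntity(ctx, -1);
                    }
                });
            }
        }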

  • PHP mail send code not working

    - by acctman
    I'm trying to use this code, but it's not processing and it's not outputting any errors:

        function send_email($subject='Activate Your Account', $html_content, $text_content, $headers)
        {
            $en['email'] = '[email protected]';
            $to = $en['email'];
            $en['memb_id'] = '39';
            $en['confcode'] = '69696969';
            $en['user'] = 'couple';

            $text_content = "Confirm Your domain.com Account\r\n";
            $text_content.= "UserName: " . $en['user'] . "\r\n";
            $text_content.= "Activate Your Account by visiting this link below:\r\n";
            $text_content.= "http://www.domain.com/confirm/" . $en['memb_id'] . "/" . $en['confcode'] . "\r\n";
            $text_content.= "\r\n";
            $text_content.= "______________________\r\n";
            $text_content.= "Thanks,\r\n";
            $text_content.= "Staff";

            $html_content = "<html><body><h1>Confirm Your domain.com Account</h1>";
            $html_content.= "<p>UserName: " . $en['user'] . "<br>";
            $html_content.= "Activate Your Account by visiting this link below:<br>";
            $html_content.= "<a href=http://www.domain.com/confirm/" . $en['memb_id'] . "/" . $en['confcode'] . ">http://www.domain.com/confirm/" . $en['memb_id'] . "/" . $en['confcode'] . "</a>";
            $html_content.= "</p>";
            $html_content.= "______________________<br>";
            $html_content.= "Thanks,<br>";
            $html_content.= " Staff";
            $html_content.= "</body></html>";

            $mime_boundary = 'Multipart_Boundary_x'.md5(time()).'x';

            $headers = "MIME-Version: 1.0\r\n";
            $headers.= "Content-Type: multipart/alternative; boundary=\"$mime_boundary\"\r\n";
            $headers.= "Content-Transfer-Encoding: 7bit\r\n";

            $body = "This is a multi-part message in mime format.\n\n";
            $body.= "--$mime_boundary\n";
            $body.= "Content-Type: text/plain; charset=\"charset=us-ascii\"\n";
            $body.= "Content-Transfer-Encoding: 7bit\n\n";
            $body.= $text_content;
            $body.= "\n\n";
            $body.= "--$mime_boundary\n";
            $body.= "Content-Type: text/html; charset=\"UTF-8\"\n";
            $body.= "Content-Transfer-Encoding: 7bit\n\n";
            $body.= $html_content;
            $body.= "\n\n";
            $body.= "--$mime_boundary--\n";

            $headers.= 'From: <[email protected]>' . "\r\n";
            $headers.= "X-Sender-IP: $_SERVER[SERVER_ADDR]\r\n";
            $headers.= 'Date: '.date('n/d/Y g:i A')."\r\n";
            $headers.= 'Reply-To: <[email protected]>' . "\r\n";

            return mail($to, $subject, $body, $headers);
            echo $to;
            echo $subject;
            echo $body;
            echo $headers;
        }
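
    Two things in this code would keep it from behaving as expected before the mail server is even involved: an optional parameter placed before required ones ($subject here) can never actually be omitted by the caller, so its default is dead weight (and newer PHP versions deprecate the pattern), and the four echo statements sit after the return, so they can never execute. A minimal reshuffle sketch:

        <?php
        // Required parameters first; the optional $subject moves to the end.
        function send_email($html_content, $text_content, $headers,
                            $subject = 'Activate Your Account')
        {
            // ... build $to, $body and $headers exactly as before ...

            $sent = mail($to, $subject, $body, $headers);
            if (!$sent) {
                // mail() returning false is the only failure signal it gives
                error_log('mail() failed for ' . $to);
            }
            return $sent;
        }

    Checking the return value of mail() (and the MTA's own log) is the quickest way to see whether the message is being handed off at all.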

  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy Cobol code and where a methodology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels:

    - CORE: bottom layer; this code is common to all releases.
    - GROUP: optional code common to several customers.
    - CUSTOMER: optional code specific to a single customer.

    When a program is needed, it is first searched for in CUSTOMER, then in GROUP and finally in CORE. A given application for us invokes many programs, which are all looked for in this sequence (think exe files and PATH under Windows).

    We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mechanism does not lend itself easily to Java, it has tended to grow in a CVS branch for each customer, requiring much too much maintenance. The Java part and the backend part tend to be developed in parallel.

    I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java environment which allows us to have a single code base with sources for each release, where we can easily select a group and a customer and work with the application as it goes for that customer, and then easily switch to another codeset and THAT customer.

    I was thinking of perhaps a scenario with an Eclipse project for each core, customer and group, and then use Project Sets to select those we need for a given scenario. The problem I cannot get my head around is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A Factory class which knows which subclass of a passed Class object to invoke instead of each and every new? Others must have had similar code base management problems. Anybody with experiences to share?

    EDIT: The conclusion to the problem above has been that CVS needs to be replaced with a source code management system better suited for dealing with many branches concurrently and the migration of source from one component to the other while keeping history. Inspired by the recent migration by slf4j and logback, we are currently looking at git as it handles branches very well. We've considered Subversion and Mercurial too, but git appears to be better for single-location, multibranched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this.

    EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of Java projects, each corresponding to one of the levels above, and use the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE projects. We still want to switch to a better versioning system, but this is so important a decision that we want to delay it as much as possible.

    EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 -- the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place?

    EDIT: Now I also need this functionality for web applications. Guice was a stopgap until JSR-330 is in place. Anybody with versioning experience?
    EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld, based on JBoss Seam, and I have reimplemented the proof-of-concept with Weld. I can see that if we use @Alternative along with an <alternatives> entry in beans.xml, we can get the behaviour we desire, i.e. provide a new implementation for a given functionality in CORE without changing a bit in the CORE jars. Initial reading up on the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
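
    For reference, a minimal sketch of the Guice 2.0 proof-of-concept described above. The names are invented; the shape is that CORE declares its default via @ImplementedBy, and a CUSTOMER module overrides it without recompiling anything in CORE (explicit bindings win over @ImplementedBy defaults):

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.ImplementedBy;
        import com.google.inject.Injector;

        // CORE: the default binding travels with the interface.
        @ImplementedBy(CoreGreeting.class)
        interface Greeting { String greet(); }

        class CoreGreeting implements Greeting {
            public String greet() { return "hello from CORE"; }
        }

        // CUSTOMER: ships in its own jar and overrides the CORE default.
        class CustomerGreeting implements Greeting {
            public String greet() { return "hello from CUSTOMER"; }
        }

        class CustomerModule extends AbstractModule {
            @Override protected void configure() {
                bind(Greeting.class).to(CustomerGreeting.class);
            }
        }

        public class Bootstrap {
            public static void main(String[] args) {
                // No modules: @ImplementedBy supplies the CORE default.
                Injector core = Guice.createInjector();
                System.out.println(core.getInstance(Greeting.class).greet());

                // CustomerModule installed: CUSTOMER wins, CORE jar untouched.
                Injector customer = Guice.createInjector(new CustomerModule());
                System.out.println(customer.getInstance(Greeting.class).greet());
            }
        }

    Selecting a customer then reduces to choosing which modules to install at bootstrap, which maps naturally onto the CUSTOMER-GROUP-CORE lookup order.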

  • Sending emails in web applications

    - by Denise
    Hi everyone, I'm looking for some opinions here. I'm building a web application which has the fairly standard functionality of:

    1. Register for an account by filling out a form and submitting it.
    2. Receive an email with a confirmation code link.
    3. Click the link to confirm the new account and log in.

    When you send emails from your web application, it's often (usually) the case that there will be some change to the persistence layer. For example:

    - A new user registers for an account on your site: the new user is created in the database and an email is sent to them with a confirmation link.
    - A user assigns a bug or issue to someone else: the issue is updated and email notifications are sent.

    How you send these emails can be critical to the success of your application, and how you send them depends on how important it is that the intended recipient receives the email. We'll look at the following four strategies in relation to the case where the mail server is down, using example 1.

    TRANSACTIONAL & SYNCHRONOUS: The sending of the email fails and the user is shown an error message saying that their account could not be created. The application appears slow and unresponsive as it waits for the connection timeout. The account is not created in the database because the transaction is rolled back.

    TRANSACTIONAL & ASYNCHRONOUS: "Transactional" here refers to sending the email to a JMS queue or saving it in a database table for another background process to pick up and send. The user account is created in the database and the email is sent to a JMS queue for processing later. The transaction is successful and committed. The user is shown a message saying that their account was created and to check their email for a confirmation link. It's possible in this case that the email is never sent due to some other error; however, the user is told that the email has been sent to them. There may be some delay in getting the email to the user if application support has to be called in to diagnose the email problem.

    NON-TRANSACTIONAL & SYNCHRONOUS: The user is created in the database, but the application gets a timeout error when it tries to send the email with the confirmation link. The user is shown an error message saying that there was an error. The application is slow and unresponsive as it waits for the connection timeout. When the mail server comes back to life and the user tries to register again, they are told their account already exists but has not been confirmed, and are given the option of having the email re-sent to them.

    NON-TRANSACTIONAL & ASYNCHRONOUS: The only difference between this and transactional & asynchronous is that if there is an error sending the email to the JMS queue or saving it in the database, the user account is still created, but the email is never sent until the user attempts to register again.

    What I'd like to know is: what have other people done here? Can you recommend any solutions other than the four I've mentioned above? What's a reasonable way of approaching this problem? I don't want to over-engineer a system that's dealing with the (hopefully) rare situation where my mail server goes down! The simplest thing to do is to code it synchronously, but are there any other pitfalls to this approach? I guess I'm wondering if there's a best practice; I couldn't find much out there by googling.
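
    The transactional & asynchronous option with a database table is often called an outbox: the email joins the business data in the same transaction, and a separate job drains the table. A rough JDBC sketch, assuming a hypothetical pending_email table:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class Registration {

            // One transaction: the user row and the queued email commit together,
            // or neither does. The mail server is never touched here.
            public void register(Connection conn, String email, String code)
                    throws SQLException {
                conn.setAutoCommit(false);
                try {
                    try (PreparedStatement user = conn.prepareStatement(
                            "INSERT INTO users (email, confirmed) VALUES (?, FALSE)")) {
                        user.setString(1, email);
                        user.executeUpdate();
                    }
                    try (PreparedStatement outbox = conn.prepareStatement(
                            "INSERT INTO pending_email (recipient, body, sent) "
                            + "VALUES (?, ?, FALSE)")) {
                        outbox.setString(1, email);
                        outbox.setString(2, "Confirm: http://example.com/confirm/" + code);
                        outbox.executeUpdate();
                    }
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }

            // A scheduled job elsewhere SELECTs rows with sent = FALSE, talks to
            // the SMTP server, and marks them sent, retrying on mail failures.
        }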

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
    Summary: My problem is that I cannot use the QNAP NFS server as an NFS datastore from my ESX hosts, despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter.

    Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2), each with 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch; ideally, I want to take advantage of DRS and the redundancy.

    I have a domain controller VM ("Central") and a vCenter VM ("vCenter") running on node1 (using node1's local datastore), with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled.

    I have a QNAP TS-669 Pro (version 4.0.3; the TS-x69 series is on the VMware Storage HCL) which I want to use as the NFS server for my NFS datastore. It has 2x NICs teamed together using 802.3ad with my switch.

    vmkernel.log: The error from the host's vmkernel.log is not very useful:

        NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None)
        cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121]
        cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457]
        cpu10:67402)StorageApdHandler: 808: APD Handle freed!
        cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server.

    Network Setup: Here is my distributed switch setup (JPG). Here are my networks:

        10.1.1.0/24  VM Management (VLAN 11)
        10.1.2.0/24  Storage Network (NFS, VLAN 12)
        10.1.3.0/24  VM vMotion (VLAN 13)
        10.1.4.0/24  VM Fault Tolerance (VLAN 14)
        10.2.0.0/24  VM's Network (VLAN 20)

    vSphere addresses:

        10.1.1.1  node1 Management
        10.1.1.2  node2 Management
        10.1.2.1  node1 vmkernel (for NFS)
        10.1.2.2  node2 vmkernel (for NFS)
        etc.

    Other addresses:

        10.1.2.100  QNAP TS-669 (NFS server)
        10.2.0.1    Domain Controller (VM on node1)
        10.2.0.2    vCenter (VM on node1)

    I'm using a Cisco SRW2024P layer-2 switch (jumbo frames enabled) with the following setup:

    - LACP LAG1 for node1 (ports 1 through 4), set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG2 for my router (ports 5 through 8), set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG3 for node2 (ports 9 through 12), set up as a VLAN trunk for VLANs 11-14,20
    - LACP LAG4 for the QNAP (ports 23 and 24), set up to accept untagged traffic into VLAN 12

    Each subnet is routable to the others, although connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop a VMware Workstation setup with a dedicated physical NIC, and it had no problems. The ACL on the NFS server share is permissive and allows all subnet ranges full access to the share.

    I can ping the QNAP from node1 vmk1, the adapter that should be used for NFS:

        ~ # vmkping -I vmk1 10.1.2.100
        PING 10.1.2.100 (10.1.2.100): 56 data bytes
        64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms
        64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms
        64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms

    Netcat does not throw an error:

        ~ # nc -z 10.1.2.100 2049
        Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded!

    The routing table of node1:

        ~ # esxcfg-route -l
        VMkernel Routes:
        Network     Netmask        Gateway       Interface
        10.1.1.0    255.255.255.0  Local Subnet  vmk0
        10.1.2.0    255.255.255.0  Local Subnet  vmk1
        10.1.3.0    255.255.255.0  Local Subnet  vmk2
        10.1.4.0    255.255.255.0  Local Subnet  vmk3
        default     0.0.0.0        10.1.1.254    vmk0

    VM kernel NIC info:

        ~ # esxcfg-vmknic -l
        Interface  Port Group/DVPort  IP Family  IP Address                Netmask        Broadcast   MAC Address        MTU   TSO MSS  Enabled  Type
        vmk0       133                IPv4       10.1.1.1                  255.255.255.0  10.1.1.255  00:50:56:66:8e:5f  1500  65535    true     STATIC
        vmk0       133                IPv6       fe80::250:56ff:fe66:8e5f  64                         00:50:56:66:8e:5f  1500  65535    true     STATIC, PREFERRED
        vmk1       164                IPv4       10.1.2.1                  255.255.255.0  10.1.2.255  00:50:56:68:f5:1f  1500  65535    true     STATIC
        vmk1       164                IPv6       fe80::250:56ff:fe68:f51f  64                         00:50:56:68:f5:1f  1500  65535    true     STATIC, PREFERRED
        vmk2       196                IPv4       10.1.3.1                  255.255.255.0  10.1.3.255  00:50:56:66:18:95  1500  65535    true     STATIC
        vmk2       196                IPv6       fe80::250:56ff:fe66:1895  64                         00:50:56:66:18:95  1500  65535    true     STATIC, PREFERRED
        vmk3       228                IPv4       10.1.4.1                  255.255.255.0  10.1.4.255  00:50:56:72:e6:ca  1500  65535    true     STATIC
        vmk3       228                IPv6       fe80::250:56ff:fe72:e6ca  64                         00:50:56:72:e6:ca  1500  65535    true     STATIC, PREFERRED

    Things I've tried/checked:

    - I'm not using DNS names to connect to the NFS server.
    - Checked MTU: set to 9000 for vmk1, the dvSwitch, the Cisco switch and the QNAP.
    - Moved the QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address; still had the same issue. Changed back afterwards, of course.
    - Tried initiating the connection of the NAS datastore from the vSphere Client (connected to vCenter or directly to the host), the vSphere Web Client and the host's ESX shell. All resulted in the same problem.
    - Tried a path name of "VM", "/VM" and "/share/VM", despite not even having a connection to the server.
    - Plugged a Linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM; it worked successfully and I had read-write access to it.
    - Tried disabling the firewall on the ESX host: esxcli network firewall set --enabled false

    I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup are the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles, but I don't know how to fix this problem without eliminating it.
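
    When ping succeeds but the mount doesn't, watching the actual mount attempt from the ESXi shell can show whether the NFS traffic really leaves on vmk1. A diagnostic sketch using tools that ship with ESXi 5.5:

        # Watch the conversation with the filer on the storage vmkernel port
        tcpdump-uw -i vmk1 host 10.1.2.100 and port 2049

        # In a second shell, retry the mount so the capture sees it
        esxcli storage nfs add --host=10.1.2.100 --share=/VM --volume-name=datastoreNAS

        # Check the result
        esxcli storage nfs list

    One caveat: nc -z against 2049 only proves the nfsd port. An NFSv3 mount also needs the portmapper (port 111) and the MOUNT service, so a filter or LACP hashing quirk affecting those flows would pass the nc test and still break the mount.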

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via wireless, it connects and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal, but no packets are received. My laptop and iPhone are right next to it and their wireless works fine.

    If I open the wireless USB status, it says it is connected to my SSID with full strength (54 Mbps -- I am 3 metres away from my router), and the activity shows 594 packets SENT and 105 RECEIVED (this goes up VERY slowly).

    I have tried the following:

    - Turned off antivirus and firewall completely.
    - Tested the wifi signal: I am writing this on my laptop, which is next to my PC and also has full wifi strength.
    - Tried a different wireless adapter: I dug out an old PCI wireless card, and it does the exact same thing.
    - Compared all wireless settings to my laptop.
    - I can ping google.com and it replies (sometimes with packet loss).
    - When I reboot the PC it will connect for a minute or two (random time) and then just stops again.
    - I tried Firefox, IE etc., no joy.
    - I have updated everything to the latest versions (Netgear WG111v2) and drivers.
    - Checked the Event Log: nothing unusual.
    - Pinged the router (and can even connect as admin for the few minutes when the internet does work).
    - Changed the MTU down to 1200 using DrTCP.
    - Checked Device Manager for conflicts: none.
    - I ping the router from the PC (192.168.0.10 -> 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via HTTP on my laptop wirelessly), if I ping 192.168.0.10 all packets time out (pinging my laptop 192.168.0.12 works fine).
    - My router admin page shows the leased IP address for 192.168.0.10 (i.e. it is definitely talking to the router initially).

    Now I am out of ideas -- please help. I think it is an OS/software issue, as I have tried two different wireless adapters (PCI and USB) with the same result, but all other wireless devices around mine work fine. It's not the firewall. It is getting assigned an IP address correctly by DHCP (my PC gets 192.168.0.10, my laptop is .12). As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps, but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal, but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance!

    UPDATE: The internet seems to be working; it is just either sending packets too small or too slowly to work (some small pages load bits of themselves very slowly but then hang). XP's networking diagnostic app gives this output:

        Last diagnostic run time: 08/30/10 08:16:38

        IP Configuration Diagnostic
        Invalid IP address
        info    Valid IP address detected: 192.168.0.10

        IP Layer Diagnostic
        Corrupted IP routing table
        info    The default route is valid
        info    The loopback route is valid
        info    The local host route is valid
        info    The local subnet route is valid
        Invalid ARP cache entries
        action  The ARP cache has been flushed

        Gateway Diagnostic
        Gateway
        info    The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list:
        info    This computer has the following default gateway entry(ies): 192.168.0.1
        info    This computer has the following IP address(es): 192.168.0.10
        info    The default gateway is in the same subnet as this computer
        info    The default gateway entry is a valid unicast address
        info    The default gateway address was resolved via ARP in 1 try(ies)
        info    The default gateway was reached via ICMP Ping in 1 try(ies)
        info    TCP port 80 on host 65.55.12.249 was successfully reached
        info    The Internet host www.microsoft.com was successfully reached
        info    The default gateway is OK

        DNS Client Diagnostic
        DNS - Not a home user scenario
        info    Using Web Proxy: no
        info    Resolving name ok for (www.microsoft.com): yes
        No DNS servers
        DNS failure

        HTTP, HTTPS, FTP Diagnostic
        HTTP, HTTPS, FTP connectivity
        info    FTP (Passive): Successfully connected to ftp.microsoft.com.
        info    HTTP: Successfully connected to www.microsoft.com.
        warn    HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out
        warn    HTTPS: Error 12002 connecting to www.passport.net: The operation timed out
        error   Could not make an HTTPS connection.
        info    Redirecting user to support call

        WinSock Diagnostic
        WinSock status
        info    All base service provider entries are present in the Winsock catalog.
        info    The Winsock Service provider chains are valid.
        info    Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test.
        info    Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test.
        info    Provider entry RSVP UDP Service Provider passed the loopback communication test.
        info    Provider entry RSVP TCP Service Provider passed the loopback communication test.
        info    Connectivity is valid for all Winsock service providers.

        Wireless Diagnostic
        Wireless - Service disabled
        Wireless - User SSID
        action  User input required: Specify network name or SSID
        Wireless - First time setup
        info    The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi.
        Wireless - Radio off
        info    Valid IP address detected: 192.168.0.10
        Wireless - Out of range
        Wireless - Hardware issue
        Wireless - Novice user
        Wireless - Ad-hoc network
        Wireless - Less preferred
        Wireless - 802.1x enabled
        Wireless - Configuration mismatch
        Wireless - Low SNR

        Network Adapter Diagnostic
        Network location detection
        info    Using home Internet connection
        Network adapter identification
        info    Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN
        info    Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS
        info    Both Ethernet and Wireless connections available, prompting user for selection
        action  User input required: Select network connection
        info    Wireless connection selected
        Network adapter status
        info    Network connection status: Connected

        HTTP, HTTPS, FTP Diagnostic
        HTTP, HTTPS, FTP connectivity
        info    FTP (Active): Successfully connected to ftp.microsoft.com.
        warn    HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved
        warn    HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out
        warn    HTTPS: Error 12002 connecting to www.passport.net: The operation timed out
        warn    HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out
        error   Could not make an HTTP connection.
        error   Could not make an HTTPS connection.

    Read the article

  • Feedback on iterating over type-safe enums

    - by Sumant
    In response to the earlier SO question "Enumerate over an enum in C++", I came up with the following reusable solution that uses the type-safe enum idiom. I'm just curious to see the community's feedback on my solution. The solution makes use of a static array, which is populated with type-safe enum objects before first use. Iteration over the enum is then simply reduced to iteration over the array. I'm aware of the fact that this solution won't work if the enumerator values are not contiguous (increasing by one).

        #include <algorithm>   // for std::for_each (added; needed to compile)
        #include <iostream>    // for std::cout (added)

        template<typename def, typename inner = typename def::type>
        class safe_enum : public def
        {
            typedef typename def::type type;
            inner val;

            static safe_enum array[def::end - def::begin];
            static bool init;

            static void initialize()
            {
                if(!init) // use double checked locking in case of multi-threading.
                {
                    unsigned int size = def::end - def::begin;
                    for(unsigned int i = 0, j = def::begin; i < size; ++i, ++j)
                        array[i] = static_cast<typename def::type>(j);
                    init = true;
                }
            }

        public:
            safe_enum(type v = def::begin) : val(v) {}
            inner underlying() const { return val; }

            static safe_enum * begin()
            {
                initialize();
                return array;
            }

            static safe_enum * end()
            {
                initialize();
                return array + (def::end - def::begin);
            }

            bool operator == (const safe_enum & s) const { return this->val == s.val; }
            bool operator != (const safe_enum & s) const { return this->val != s.val; }
            bool operator <  (const safe_enum & s) const { return this->val <  s.val; }
            bool operator <= (const safe_enum & s) const { return this->val <= s.val; }
            bool operator >  (const safe_enum & s) const { return this->val >  s.val; }
            bool operator >= (const safe_enum & s) const { return this->val >= s.val; }
        };

        template <typename def, typename inner>
        safe_enum<def, inner> safe_enum<def, inner>::array[def::end - def::begin];

        template <typename def, typename inner>
        bool safe_enum<def, inner>::init = false;

        struct color_def
        {
            enum type { begin, red = begin, green, blue, end };
        };

        typedef safe_enum<color_def> color;

        template <class Enum>
        void f(Enum e)
        {
            std::cout << static_cast<unsigned>(e.underlying()) << std::endl;
        }

        int main()
        {
            std::for_each(color::begin(), color::end(), &f<color>);
            color c = color::red;
        }
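    One caveat worth flagging for feedback: as written, the init flag is not protected by any lock, so the "double checked locking" comment is aspirational rather than implemented. A minimal alternative sketch, assuming a C++11 compiler (this begin() is my variation, not part of the original):

        // Sketch only: C++11 guarantees that a function-local static is
        // initialized exactly once, even under concurrent first calls,
        // so the one-time fill can piggyback on that guarantee.
        static safe_enum * begin()
        {
            static const bool done = (initialize(), true); // thread-safe one-time fill
            (void)done; // silence unused-variable warnings
            return array;
        }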

    Read the article

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem: the application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and a unit for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to one or more interested parties. Qt's Signal/Slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<typename T, typename U> // sample type and units
        class DataSource : public QObject
        {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<typename T, typename U> // sample type and units
        class IAcquiredSamples
        {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject
        {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and a number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API: it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This puts the onus on each subclass to register its particular sample vector type with Q_DECLARE_METATYPE/qRegisterMetaType; not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
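    For concreteness, here is a minimal sketch of the QVariant route, assuming Qt 4.6 or later and a plain double-based sample vector standing in for the Boost.Units type; VoltageVector and VoltageSource are hypothetical names, not from the question:

        #include <QObject>
        #include <QVariant>
        #include <QMetaType>
        #include <vector>

        typedef std::vector<double> VoltageVector; // stand-in for the Boost.Units vector

        Q_DECLARE_METATYPE(VoltageVector)          // makes the type storable in a QVariant

        class VoltageSource : public QObject
        {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples); // payload is a VoltageVector

        public:
            VoltageSource()
            {
                // Needed once so queued connections can marshal the type across threads.
                qRegisterMetaType<VoltageVector>("VoltageVector");
            }

            void onHardwareData(const VoltageVector & v)
            {
                emit samplesAcquired(QVariant::fromValue(v));
            }
        };

        // A client slot would unpack the payload with:
        //   VoltageVector v = variant.value<VoltageVector>();

    The tag-struct objection from the post still applies: nothing in this sketch tells a generic client which concrete vector type is inside the QVariant.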

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550 kB/s, even over 100 Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7 MByte/s! I also tried the AnalogX SimpleWWW server just to test its plain HTTP speed, and it gave me a healthy 3.3 MByte/s. I am at a total loss here.

    I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576   # (tried multiples of that too, up to 100 MBytes)
        EnableSendfile Off       # (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off      # (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1 ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me.

    So:

        AnalogX HTTP: 3300 kB/s
        Gene6 FTPD, plain: 3500 kB/s
        Gene6 FTPD, Implicit and Explicit SSL, AES256 cipher: 1800-2000 kB/s
        freeSSHD: 1100 kB/s
        SMB shared folder: about 3000 kB/s
        Apache HTTP, plain: 550 kB/s
        Apache HTTPS: 2700 kB/s

    Clients that were used in the bandwidth testing:

        Internet Explorer 8 (HTTP, HTTPS)
        Firefox 8 (HTTP, HTTPS)
        Chrome 13 (HTTP, HTTPS)
        Opera 11.60 (HTTP, HTTPS)
        wget under CygWin (HTTP, HTTPS)
        FileZilla (FTP, FTPS, FTP+ES, SFTP)
        Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200 MHz machine with 2 GB RAM. However, I would like Apache to serve at least 2 MByte/s instead of 550 kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but with no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors. There is also nothing strange to be seen in access.log. I have already stripped my httpd.conf down to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried the newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output is generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com.
    Here it is:

    LiveHTTPHeaders, plain HTTP: http://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    Read the article

  • calling calloc - memory leak valgrind

    - by Mike
    The following code is an example from the NCURSES menu library. I'm not sure what could be wrong with the code, but valgrind reports some problems. Any ideas?

        ==4803== 1,049 (72 direct, 977 indirect) bytes in 1 blocks are definitely lost in loss record 25 of 36
        ==4803==    at 0x4C24477: calloc (vg_replace_malloc.c:418)
        ==4803==    by 0x400E93: main (in /home/gerardoj/a.out)
        ==4803==
        ==4803== LEAK SUMMARY:
        ==4803==    definitely lost: 72 bytes in 1 blocks
        ==4803==    indirectly lost: 977 bytes in 10 blocks
        ==4803==    possibly lost: 0 bytes in 0 blocks
        ==4803==    still reachable: 64,942 bytes in 262 blocks

    Source code:

        #include <curses.h>
        #include <menu.h>
        #include <stdlib.h>   /* for calloc (added; needed to compile cleanly) */
        #include <string.h>   /* for strcat (added) */

        #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
        #define CTRLD 4

        char *choices[] = {
            "Choice 1", "Choice 2", "Choice 3", "Choice 4",
            "Choice 5", "Choice 6", "Choice 7", "Exit",
        };

        int main()
        {
            ITEM **my_items;
            int c;
            MENU *my_menu;
            int n_choices, i;
            ITEM *cur_item;

            /* Initialize curses */
            initscr();
            cbreak();
            noecho();
            keypad(stdscr, TRUE);

            /* Initialize items */
            n_choices = ARRAY_SIZE(choices);
            my_items = (ITEM **)calloc(n_choices + 1, sizeof(ITEM *));
            for (i = 0; i < n_choices; ++i) {
                my_items[i] = new_item(choices[i], choices[i]);
            }
            my_items[n_choices] = (ITEM *)NULL;
            my_menu = new_menu((ITEM **)my_items);

            /* Make the menu multi valued */
            menu_opts_off(my_menu, O_ONEVALUE);

            mvprintw(LINES - 3, 0, "Use <SPACE> to select or unselect an item.");
            mvprintw(LINES - 2, 0, "<ENTER> to see presently selected items(F1 to Exit)");
            post_menu(my_menu);
            refresh();

            while ((c = getch()) != KEY_F(1)) {
                switch (c) {
                case KEY_DOWN:
                    menu_driver(my_menu, REQ_DOWN_ITEM);
                    break;
                case KEY_UP:
                    menu_driver(my_menu, REQ_UP_ITEM);
                    break;
                case ' ':
                    menu_driver(my_menu, REQ_TOGGLE_ITEM);
                    break;
                case 10: {
                    char temp[200];
                    ITEM **items;

                    items = menu_items(my_menu);
                    temp[0] = '\0';
                    for (i = 0; i < item_count(my_menu); ++i)
                        if (item_value(items[i]) == TRUE) {
                            strcat(temp, item_name(items[i]));
                            strcat(temp, " ");
                        }
                    move(20, 0);
                    clrtoeol();
                    mvprintw(20, 0, temp);
                    refresh();
                } break;
                }
            }

            free_item(my_items[0]);
            free_item(my_items[1]);
            free_menu(my_menu);
            endwin();
        }
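    For what it's worth, the arithmetic in the report lines up with the allocation above: 72 direct bytes is exactly the calloc'd array of nine ITEM pointers (9 x 8 bytes on a 64-bit build), which is never free'd, and only two of the eight items are ever freed, which would account for the indirectly lost blocks. A complete teardown sketch, assuming the usual ncurses rule that a menu is unposted and freed before the items attached to it:

        /* Sketch of a full cleanup; order matters: free_item() fails with
         * E_CONNECTED while an item is still attached to a menu. */
        unpost_menu(my_menu);
        free_menu(my_menu);
        for (i = 0; i < n_choices; ++i)
            free_item(my_items[i]);
        free(my_items);   /* releases the calloc'd pointer array valgrind flags */
        endwin();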

    Read the article

  • Multidimensional array problem in VHDL?

    - by Nektarios
    I'm trying to use a multidimensional array in VHDL and I'm having a lot of trouble getting it to work properly. My issue is that I've got an array of 17, of 16 vectors, of a given size. What I want to do is create 17 registers, each holding an array of 16 std_logic_vectors of 32 bits (which equals my b, 512 bits). So, on the register instantiation, I'm trying to pass something to input and output that tells the compiler/synthesizer I want to pass in something that is 512 bits worth... similar to C, if I had:

        int var[COLS][ROWS][ELEMENTS];
        memcpy(&var[3].. // I'm talking about the 3rd COL here, passing in memory that is ROWS*ELEMENTS long

    (My actual declaration is here:)

        type partial_pipeline_registers_type is array (0 to 16, 0 to 15) of std_logic_vector(iw - 1 downto 0);
        signal h_blk_pipelined_input : partial_pipeline_registers_type;

    I tried simply using h_blk_pipelined_input(0) .. up to (16), but this doesn't work. I get the following error, which makes me see that I need to double-index into the array:

        ERROR:HDLParsers:821 - (at the register) Wrong index type for h_blk_pipelined_input.

    So then I tried what's below, and I get this error:

        ERROR:HDLParsers:164 - (at the register code). parse error, unexpected TO, expecting COMMA or CLOSEPAR

        instantiate_h_pipelined_reg : regn
            generic map (N => b, init => bzeros)
            port map (clk => clk, rst => '0', en => '1',
                      input  => h_blk_pipelined_input((i - 1), 0 to 15),
                      output => h_blk_pipelined_input((i), 0 to 15));
        -- Changing 0 to 15 to (0 to 15) has no effect...

    I'm using XST, and from their documentation (http://www.xilinx.com/itp/xilinx6/books/data/docs/xst/xst0067_9.html), the above should have worked:

        ...declaration:
        subtype MATRIX15 is array(4 downto 0, 2 downto 0) of STD_LOGIC_VECTOR (7 downto 0);
        A multi-dimensional array signal or variable can be completely used:
        Just a slice of one row can be specified:
        MATRIX15 (4,4 downto 1) <= TAB_B (3 downto 0);

    One alternative is to create more registers that are 16 times smaller, and instead of trying to do all '0 to 15' at once, do that 15 additional times. However, I think this may lead to inefficiency in synthesis, and I don't feel it is the right solution.

    EDIT: Tried what Ben said:

        instantiate_h_m_qa_pipeline_registers: for i in 1 to 16 generate
            instantiate_h_pipelined_reg : regn
                generic map (N => b, init => bzeros)
                port map (clk => clk, rst => '0', en => '1',
                          input  => h_blk_pipelined_input(i - 1),
                          output => h_blk_pipelined_input(i));
        end generate instantiate_h_m_qa_pipeline_registers;

    The signals are now defined as:

        type std_logic_block is array (0 to 15) of std_logic_vector(iw - 1 downto 0);
        type partial_pipeline_registers_type is array (0 to 16) of std_logic_block;
        signal h_blk_pipelined_input : partial_pipeline_registers_type;

    And the error I get from XST is:

        ERROR:HDLParsers:800 - ((where the register part is)) Type of input is incompatible with type of h_blk_pipelined_input.

    I'm able to do everything I could do before, using ()() syntax instead of (,), so I haven't lost anything going this way, but it still doesn't resolve my problem.
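    One hedged reading of that last error: if regn's input and output ports are declared as plain std_logic_vector(b - 1 downto 0) (an assumption; the regn entity isn't shown), then h_blk_pipelined_input(i) is a std_logic_block and the types really are incompatible. A sketch of a fix is a pair of flatten/unflatten conversion functions and flat intermediate signals around each port map:

        -- Sketch only, under the assumption above.
        function to_slv(blk : std_logic_block) return std_logic_vector is
            variable v : std_logic_vector(16 * iw - 1 downto 0);
        begin
            for k in 0 to 15 loop
                v((k + 1) * iw - 1 downto k * iw) := blk(k);
            end loop;
            return v;
        end function;

        function to_block(v : std_logic_vector) return std_logic_block is
            variable blk : std_logic_block;
        begin
            for k in 0 to 15 loop
                blk(k) := v((k + 1) * iw - 1 downto k * iw);
            end loop;
            return blk;
        end function;

        instantiate_h_m_qa_pipeline_registers : for i in 1 to 16 generate
            signal flat_in, flat_out : std_logic_vector(b - 1 downto 0);
        begin
            flat_in <= to_slv(h_blk_pipelined_input(i - 1));
            instantiate_h_pipelined_reg : regn
                generic map (N => b, init => bzeros)
                port map (clk => clk, rst => '0', en => '1',
                          input => flat_in, output => flat_out);
            h_blk_pipelined_input(i) <= to_block(flat_out);
        end generate;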

    Read the article

  • OS Missing? Messed up the MBR on Win7 64-bit

    - by hom3lesshom3boy
    I have a Windows 7 machine with two hard drives: a 1 TB C: drive and a 500 GB J:. I had Windows XP installed on C: and Windows 7 installed on J:. I installed Windows 7 after Windows XP, from an installer .exe I (legally) bought and downloaded. It, and all of my other files, are sitting on my J: drive intact.

    While under my Windows 7 install, a few days ago I decided to use Piriform's CCleaner and its Drive Wiper utility to wipe the C: drive. 1% into the process I cancelled, then attempted to use it again. It gave me an error saying it couldn't format the drive, so I poked around the Internet a bit, gave up, and restarted my computer.

    I first got an "OS is missing" error after the computer booted past the BIOS. I downloaded and put UBCD on a bootable USB to use another drive-wiping tool to completely erase the C: drive, hoping it would take the problem with it. No luck. I tried to use TestDisk to make J: my primary active drive, but no luck; I still got the "OS is missing" error. Or sometimes it would hang at "Verifying DMI Pool". Or sometimes I would get the "NTLDR is missing" error.

    I got hold of Hiren's and put it on another bootable USB. First I tried the "Boot Windows 7 from Hard Drive" option, and got "Error 15: File Not Found". I tried the "Fix 'NTLDR is Missing'" option (I'm not quite sure why this even shows up, since I'm trying to get into an HDD with Windows 7 installed; I probably messed up somewhere when I used TestDisk) and got a list of choices. I'll run through the error messages I got:

        1st try - Windows could not start because the following file is missing or corrupt: \system32\hal.dll
        2nd try - Windows could not start because the following file is missing or corrupt: \system32\ntoskrnl.exe
        3rd try - Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware.
        4th-8th try - Same as #3
        9th try - I/O Error accessing boot sector file multi(0)disk(0)fdisk(0)\BOOTSEC.DOS. And the computer freezes.
        10th try - the computer restarts

    Needless to say, not a single one of those works. I then tried to open up the Windows 7 exe I have sitting on my J: from the Mini-XP OS on Hiren's, but it won't run because I'm trying to run a 64-bit file from a 32-bit OS. At least, that's the problem according to these guys: http://social.technet.microsoft.com/...-b2f54e9c7d18/

    I then borrowed a 64-bit Windows Home Premium CD from a friend to get to the recovery options, but I get the error message: "This version of System Recovery Options is not compatible with the version of Windows you are trying to repair. Try using a recovery disc that is compatible with this version of Windows."

    I pressed Shift + F10 to get to the Command Prompt directly. These are the exact steps I took from there (paraphrased a little):

        X:\Sources> bootrec /Fixmbr
        The operation completed successfully.
        X:\Sources> bootrec /Fixboot
        The operation completed successfully.

    I restarted my computer, but it still didn't work. I unplugged the C: drive, then tried bootrec and DiskPart:

        X:\Sources> bootrec.exe
        X:\Sources> bootrec /RebuildBcd
        Total identified Windows installations: 1
        [1] \\?\GLOBALROOT\Device\HarddiskVolume1\Windows
        Add installation to bootlist? Yes(Y)/No(N)/All(A): y
        The requested system device cannot be found.

        X:\Sources> DiskPart
        DISKPART> List Disk
        Disk #  Status  Size     Free  Dyn  Gpt
        Disk 0  Online  465 GB   0 B        *
        Disk 1  Online  1000 MB  0 B             (this is Hiren's on a bootable USB)

        DISKPART> Select Disk 0
        Disk 0 is now the selected disk.

        DISKPART> List Partition
        Partition #  Type    Size    Offset
        Partition 1  System  465 GB  31 KB

        DISKPART> Select Partition 1
        Partition 1 is now the selected partition.

        DISKPART> Active
        The selected disk is not a fixed MBR disk. The ACTIVE command can only be used on fixed MBR disks.

        DISKPART> exit
        Leaving DiskPart...

        X:\Sources> bootrec /Fixmbr
        The operation completed successfully.
        X:\Sources> bootrec /Fixboot
        The operation completed successfully.

    Before I go any further, is there anything I'm overlooking or doing wrong? All I care about is making J: and Windows 7 bootable again.

    SPECS:
    Windows 7 Professional 64-bit
    GIGABYTE motherboard, Socket 775, GA-P35-DS3R (rev. 2.1)
    Crucial Ballistix 2048MB PC6400 DDR2 800MHz (2x2GB)
    Intel Core 2 Duo E6700 processor (2.66 GHz, I think... not sure anymore)
    C: HDD - SAMSUNG HD103UJ (1 TB, not plugged in)
    J: HDD - WDC WD5000AKS-00V1A0 (500 GB)

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than via the Ethernet switch it's plugged into.

    Background: My network is

        SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi†
        SMC8014             ASA-5505          GS608v2 gigE       DIR-655 rev A3 gigE

    †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built-in router/firewall.

    Endpoints:
    - MacBook Pro 17" mid-2010
    - iPhone 4S
    - Fedora 12 Linux server: a reasonably fast dual-Athlon X2, VelociRaptors, etc.

    All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through its wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone.

    How I'm measuring "slower": I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that.

    Results:

    Speedtest.net, closest server over Comcast business-class:

        CONFIG                        | PING (ms) | DOWN (Mbps) | UP (Mbps)
        Mac <-> Ethernet <-> Netgear  | 9         | 31.6        | 6.8
        Mac <-> Ethernet <-> D-Link   | 8         | 4.1         | 6.0
        Mac <-> WiFi <-> D-Link       | 9         | 1.4         | 2.9
        iPhone <-> WiFi <-> D-Link    | 67        | 0.4         | 1.6

    Speedtest Mini on the Linux PC:

        CONFIG                        | DOWN (Mbps) | UP (Mbps)
        Mac <-> Ethernet <-> Netgear  | 97.2        | 76.9
        Mac <-> Ethernet <-> D-Link   | 8.2         | 24.2
        Mac <-> WiFi <-> D-Link       | 1.0         | 8.6

    Slow typing in SSH:

        Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth
        Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy

    Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since SSH is choppy with just slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth.

    What I've tried:
    - Swapping all "good" and "bad" cables
    - Re-plugging the "bad" cable from the D-Link into the Netgear and watching it become the "good" cable
    - Pulling cables away from power lines
    - Verifying that the Mac auto-detects the D-Link as gigE
    - Trying to verify the link speed of the D-Link <-> Netgear connection, but the firmware doesn't report it
    - Verifying that the D-Link sees no TX/RX errors or collisions
    - Using different Ethernet ports on both the Netgear and the D-Link
    - Resetting the D-Link to factory settings
    - Upgrading the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest
    - Rebooting everything at least once
    - On the Mac, disabling Wi-Fi during the Ethernet tests, and unplugging Ethernet during the Wi-Fi tests
    - Using iStumbler to verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apartment building)
    - Verifying that the only client connected to the Wi-Fi was the iPhone
    - Verifying that nothing was being chatty on my network according to the WISH log
    - Enabling and disabling all sorts of D-Link settings, including forcing WAN auto-detect to gigE

    So. I don't mind buying a new access point - I wouldn't mind having a dual-link network - but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled.

    What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap - thick carpet, socks, HVAC - but I didn't feel one, and does that really happen anymore? Can I test whether it's Ethernet-layer vs. TCP-layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to Server Fault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
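    On the Ethernet-vs-TCP question, one way to take broadband and the application layer out of the picture, assuming the iperf utility (version 2 syntax) can be installed on both the Fedora box and the Mac; the address below is a placeholder for this network:

        # On the Fedora server: listen for TCP throughput tests.
        iperf -s

        # On the Mac, once plugged into the Netgear, then again via the D-Link:
        iperf -c 192.168.1.10 -t 30 -i 5   # 30-second test, report every 5 s

        # Adding -u to both ends turns the same pair into a UDP test, which
        # sidesteps TCP windowing and retransmission entirely.

    If the D-Link path is slow even for UDP with clean cable swaps, that points at the bridge hardware rather than anything in the TCP stack.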

    Read the article

  • How to stop OpenGL from applying blending to certain content? (see pics)

    - by RexOnRoids
    Supporting info: I use cocos2d to draw a sprite (the graph background) on the screen (z: -1). I then use cocos2d to draw lines/points (z: 0) on top of the background, and make some calls to OpenGL blending functions before the drawing to SMOOTH out the lines.

    Problem: The problem is that, aside from producing smooth lines/points, calling these OpenGL blending functions seems to "degrade" the underlying sprite (the graph background). As you can see from the images below, the "degraded" background is made darker and less sharp in Case 2. So there is a tradeoff: I can either have (Case 1) a nice background and choppy lines/points, or I can have (Case 2) nice smooth lines/points and a degraded background. But obviously I need both.

    THE QUESTION: How do I set up OpenGL so as to apply the blending only to the layer with the lines/points in it, and thus leave the background alone?

    The Code: I have included the code of the draw() method of the CCLayer for both cases explained above. As you can see, the code producing the difference between Case 1 and Case 2 comes down to one or two lines involving OpenGL blending.

    Case 1 -- MainScene.h (CCLayer):

        -(void)draw{
            int lastPointX = 0;
            int lastPointY = 0;
            GLfloat colorMAX = 255.0f;
            GLfloat valR;
            GLfloat valG;
            GLfloat valB;
            if([self.myGraphManager ready]){
                valR = (255.0f/colorMAX)*1.0f;
                valG = (255.0f/colorMAX)*1.0f;
                valB = (255.0f/colorMAX)*1.0f;
                NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator];
                GraphPoint* object;
                while ((object = [enumerator nextObject])) {
                    if(object.filled){
                        /* Commenting out the following two lines makes it impossible to have
                           smooth lines/points, but has merit in that it does not degrade the
                           background sprite. */
                        //glEnable(GL_BLEND);
                        //glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
                        glEnable(GL_LINE_SMOOTH);
                        glLineWidth(1.5f);
                        glColor4f(valR, valG, valB, 1.0);
                        ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y));
                        lastPointX = object.position.x;
                        lastPointY = object.position.y;
                        glPointSize(3.0f);
                        glEnable(GL_POINT_SMOOTH);
                        glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
                        ccDrawPoint(ccp(lastPointX, lastPointY));
                    }
                }
            }
        }

    Case 2 -- MainScene.h (CCLayer):

        -(void)draw{
            int lastPointX = 0;
            int lastPointY = 0;
            GLfloat colorMAX = 255.0f;
            GLfloat valR;
            GLfloat valG;
            GLfloat valB;
            if([self.myGraphManager ready]){
                valR = (255.0f/colorMAX)*1.0f;
                valG = (255.0f/colorMAX)*1.0f;
                valB = (255.0f/colorMAX)*1.0f;
                NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator];
                GraphPoint* object;
                while ((object = [enumerator nextObject])) {
                    if(object.filled){
                        /* Enabling the following two lines gives nice smooth lines/points,
                           but has a problem in that it degrades the background sprite. */
                        glEnable(GL_BLEND);
                        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
                        glEnable(GL_LINE_SMOOTH);
                        glLineWidth(1.5f);
                        glColor4f(valR, valG, valB, 1.0);
                        ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y));
                        lastPointX = object.position.x;
                        lastPointY = object.position.y;
                        glPointSize(3.0f);
                        glEnable(GL_POINT_SMOOTH);
                        glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
                        ccDrawPoint(ccp(lastPointX, lastPointY));
                    }
                }
            }
        }
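    One plausible explanation, offered as a sketch rather than a confirmed fix: cocos2d composites its sprites with its own blend state (premultiplied-alpha textures by default), and OpenGL state set in draw() persists after the method returns, so the glEnable(GL_BLEND)/glBlendFunc call may simply be leaking into the engine's subsequent sprite rendering. Scoping the state change to this draw method, assuming cocos2d-iphone's CC_BLEND_SRC/CC_BLEND_DST default macros are available, would look like:

        -(void)draw{
            // ... enable smoothing and blending, draw lines/points as in Case 2 ...

            // Restore the engine's expected blend state before returning, so the
            // background sprite is composited the way cocos2d set it up.
            glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
            glDisable(GL_LINE_SMOOTH);
            glDisable(GL_POINT_SMOOTH);
        }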

    Read the article

  • HttpClient multithread performance

    - by pepper
    I have an application which downloads more than 4500 HTML pages from 62 target hosts using HttpClient (4.1.3 or 4.2-beta). It runs on Windows 7 64-bit. Processor: Core i7 2600K. Network bandwidth: 54 Mb/s.

    At this moment it uses these parameters:
    - DefaultHttpClient and PoolingClientConnectionManager;
    - it also has the IdleConnectionMonitorThread from http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html;
    - maximum total connections = 80;
    - default maximum connections per route = 5;
    - for thread management it uses ForkJoinPool with parallelism level = 5 (do I understand correctly that this is the number of worker threads?).

    In this case my network usage (in Windows Task Manager) does not rise above 2.5%. Downloading the 4500 pages takes 70 minutes. And in the HttpClient logs I have things like:

        DEBUG ForkJoinPool-2-worker-1 [org.apache.http.impl.conn.PoolingClientConnectionManager]: Connection released: [id: 209][route: {}->http://stackoverflow.com][total kept alive: 6; route allocated: 1 of 5; total allocated: 10 of 80]

    Total allocated connections do not rise above 10-12, despite the fact that I've set it up for 80 connections. If I try to raise the parallelism level to 20 or 80, network usage remains the same, but a lot of connection timeouts are generated. I've read the tutorials on hc.apache.org (HttpClient Performance Optimization Guide and HttpClient Threading Guide), but they don't help.

    The task's code looks like this:

        public class ContentDownloader extends RecursiveAction {
            private final HttpClient httpClient;
            private final HttpContext context;
            private List<Entry> entries;

            public ContentDownloader(HttpClient httpClient, List<Entry> entries){
                this.httpClient = httpClient;
                context = new BasicHttpContext();
                this.entries = entries;
            }

            private void computeDirectly(Entry entry){
                final HttpGet get = new HttpGet(entry.getLink());
                try {
                    HttpResponse response = httpClient.execute(get, context);
                    int statusCode = response.getStatusLine().getStatusCode();
                    if ( (statusCode >= 400) && (statusCode <= 600) ) {
                        logger.error("Couldn't get content from " + get.getURI().toString()
                                + "\n" + response.toString());
                    } else {
                        HttpEntity entity = response.getEntity();
                        if (entity != null) {
                            String htmlContent = EntityUtils.toString(entity).trim();
                            entry.setHtml(htmlContent);
                            EntityUtils.consumeQuietly(entity);
                        }
                    }
                } catch (Exception e) {
                } finally {
                    get.releaseConnection();
                }
            }

            @Override
            protected void compute() {
                if (entries.size() <= 1){
                    computeDirectly(entries.get(0));
                    return;
                }
                int split = entries.size() / 2;
                invokeAll(new ContentDownloader(httpClient, entries.subList(0, split)),
                          new ContentDownloader(httpClient, entries.subList(split, entries.size())));
            }
        }

    And the question is: what is the best practice for using a multi-threaded HttpClient? Are there rules for setting up the ConnectionManager and HttpClient? How can I use all 80 connections and raise network usage? If necessary, I will provide more code.
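    For reference, a hedged sketch of the knobs involved, assuming HttpClient 4.2. Note that the pool can only fill up if enough requests are actually in flight: with a parallelism level of 5, at most five downloads run concurrently, so "total allocated: 10 of 80" is roughly what one would expect regardless of the caps. The values below are illustrative, not a recommendation:

        // Sketch: to keep 80 connections busy, both the per-route cap and the
        // number of concurrently running tasks have to allow it.
        PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
        cm.setMaxTotal(80);              // pool-wide cap (as in the setup above)
        cm.setDefaultMaxPerRoute(10);    // was 5; with 62 hosts the total cap then rules
        HttpClient httpClient = new DefaultHttpClient(cm);

        // Parallelism bounds the number of in-flight requests; 40 here is a
        // hypothetical starting point to tune upward from 5.
        ForkJoinPool pool = new ForkJoinPool(40);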

    Read the article

  • Spring MVC working with Web Designers

    - by jboyd
    When working with web designers in a Spring MVC and JSP environment, are there any tools or helpful patterns to reduce the pain of going back and forth from HTML to JSP and back? Project requirements dictate that views will change often, and it can be difficult to make changes efficiently because of the amount of Java code that leaks into the view layer. Ideally I'd like to remove almost all Java code from the view, but that does not seem to work with the Spring/JSP philosophy, where often the way to remove Java code is to replace it with tag libraries, which still have a similar problem.

    To provide a little clarity to my question, I'm going to include some existing code (inherited by me) to show the kinds of problems I'm likely to face when changing the look of our views:

        <%-- Begin looping through results --%>
        <%
            List memberList = memberSearchResults.getResults();
            for(int i = start - 1; i < memberList.size() && i < end; i++) {
                Profile profile = (Profile)memberList.get(i);
                long profileId = profile.getProfileId();
                String nickname = profile.getNickname();
                String description = profile.getDescription();
                Image image = profile.getAvatarImage();
                String avatarImageSrc = null;
                int avatarImageWidthNum = 0;
                int avatarImageHeightNum = 0;
                if(null != image) {
                    avatarImageSrc = image.getSrc();
                    avatarImageWidthNum = image.getWidth();
                    avatarImageHeightNum = image.getHeight();
                }
                String bgColor = i % 2 == 1 ? "background-color:#FFF" : "";
        %>
        <div style="float:left;clear:both;padding:5px 0 5px 5px;cursor:pointer;<%= bgColor %>"
             onclick='window.location="profile.sp?profileId=<%= profileId %>"'>
            <div style="float:right;clear:right;padding-left:10px;width:515px;font-size:10px;color:#7e7e7e">
                <h6><%= nickname %></h6>
                <%= description %>
            </div>
            <img style="float:left;clear:left;" alt="Avatar Image"
                 src="<%= null != avatarImageSrc && avatarImageSrc.trim().length() > 0 ? avatarImageSrc : "images/defaultUserImage.png" %>"
                 <%= avatarImageWidthNum < avatarImageHeightNum ? "height='59'" : "width='92'" %> />
        </div>
        <% } // End loop %>

    Now, ignoring some of the code smells there, it's obvious that if someone wants to change the look of that DIV, it would be necessary to re-place all the Java/JSP code into the new HTML the designers deliver (the designers don't work in JSP files; they have their own HTML versions of the website). Which is tedious, and prone to errors.
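    For comparison, a hedged sketch of the same loop in JSTL/EL, assuming the controller exposes the already-sliced result list as a model attribute named pagedMembers (a hypothetical name) and the inline styles move to hypothetical CSS classes, so the view carries no scriptlets and designers only touch markup:

        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

        <c:forEach var="profile" items="${pagedMembers}" varStatus="row">
            <div class="member ${row.index % 2 == 1 ? 'alt' : ''}"
                 onclick='window.location="profile.sp?profileId=${profile.profileId}"'>
                <div class="member-info">
                    <h6>${profile.nickname}</h6>
                    ${profile.description}
                </div>
                <img class="avatar" alt="Avatar Image"
                     src="${not empty profile.avatarImage.src ? profile.avatarImage.src : 'images/defaultUserImage.png'}" />
            </div>
        </c:forEach>

    The pagination and width/height arithmetic would move into the controller or the Profile/Image beans, which is precisely the kind of relocation the question is asking whether there are patterns for.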

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.

    Choice of language: Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, bash scripts, etc. Maybe I cannot target all of them, but if it's possible, I'll do it.

    Requirements: It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:
    - The library itself will start out small but will definitely grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though.
    - The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way.
    - Clients may change the objects, and the library must be notified thereof.
    - The library may change the objects, and the client must be notified thereof, if it already has a handle to that object.
    - The library must be multi-threaded, because it will maintain network connections to several other hosts.
    - While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, notifying the client on success (or failure).

    Of course, answers are welcome whether they address my specific requirements or answer the question in a general way that matters to a wider audience!

    My assumptions, so far: Here are some of my assumptions and conclusions, which I gathered in the past months:
    - Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc/g++).
    - But my interface has to be a clean C interface that does not involve name mangling.
    - Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values.
    - If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory.
    - For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler or include the library in source form). Is this also true for usage in C#?
    - For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double-check it.
    - Not all client languages handle multithreading well, so there should be a single thread talking to the client.
    - For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM.
    - For usage in bash scripts, there must be an executable with a command-line interface.
    - For the other client languages, I have no idea.
    - For most client languages, it would be nice to have kind of an adapter interface written in that language. I think there are tools to automatically generate this for Java and some others.
    - For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based, but I don't know if it's worth the effort.

    (A sketch of what such a flat C interface could look like follows after the subquestions.)

    Possible subquestions:
    - Is this possible with manageable effort, or is it just too much portability?
    - Are there any good books / websites about this kind of design criteria?
    - Are any of my assumptions wrong?
    - Which open source libraries are worth studying to learn from their design / interface / source?

    meta: This question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)
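    The sketch mentioned above: a minimal flat C facade over a C++ implementation, matching the assumptions in the post (opaque handles, primitive parameters, and a callback for asynchronous notification). All names here are hypothetical illustrations, not a proposed API:

        /* mylib.h - hypothetical public C interface */
        #ifdef __cplusplus
        extern "C" {            /* suppress C++ name mangling for these symbols */
        #endif

        typedef struct mylib_session mylib_session;   /* opaque handle; clients never
                                                         see the C++ object behind it */
        typedef void (*mylib_on_done)(int request_id, int status, void *user_data);

        mylib_session *mylib_open(const char *host, int port);
        void           mylib_close(mylib_session *s);

        /* Synchronous query: primitives in, primitive out. */
        int  mylib_get_attribute(mylib_session *s, long object_id,
                                 int attribute, double *out_value);

        /* Asynchronous request: returns a request id immediately and invokes the
           callback from the library's single client-facing thread when finished. */
        int  mylib_request_refresh(mylib_session *s, long object_id,
                                   mylib_on_done callback, void *user_data);

        #ifdef __cplusplus
        }
        #endif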

    Read the article

  • Modern alternatives to Java

    - by Ralph
    I have been a Java developer for 14 years and have written an enterprise-level (~500 kloc) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress the language has made to "modernize" itself, and I am looking for an alternative for ongoing development.

    I have considered moving to the .NET platform, but I have issues with using something that only runs well on Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new MacBook Pro as soon as Apple releases their rumored new Arrandale-based machines, and I want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language.

    Among JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that with the soon-to-be-released JDK 7, with its invokedynamic instruction, this will improve, but I don't know by how much. Groovy is also not truly a functional language, although it provides closures and some "functional" features on collections. It does not embrace immutability.

    I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses. I am looking for opinions. I am not an expert in either of these languages; I have read two and a half books on Scala and am currently reading Stu Halloway's book on Clojure.

    Scala is strongly, statically typed. I know the dynamic-language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time detection of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's interoperation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. For example, I like the Apache Commons CLI library for command-line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code):

        final Options options = new Options() {
            {
                addOption(new Option("?", "help", false, "Show this usage information"));
                // other options
            }
        };

    I can't figure out how to do the same thing in Clojure (except by using (doto ...)), although that may reflect my lack of knowledge of the language.

    Clojure's collections are optimized for immutability and rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient.

    Clojure was designed from the beginning for concurrency (as was Scala), and with modern multi-core processors concurrency takes on more importance; but I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for those applications, since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete.

    I know that there is no right answer to this "question", but I thought I would open it up for discussion. Are there other JVM-based languages that can be used for enterprise-level development?
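    For reference, a sketch of what that initialization looks like with Clojure's doto, assuming Commons CLI is on the classpath; it is method chaining rather than a true instance-initializer block, which is the distinction the question is drawing:

        (import '(org.apache.commons.cli Options Option))

        ;; doto threads the new Options instance through each method call and
        ;; returns it, which reads much like the Java double-brace idiom above.
        (def options
          (doto (Options.)
            (.addOption (Option. "?" "help" false "Show this usage information"))
            ;; other options
            ))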

    Read the article

< Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >