Search Results

Search found 131 results on 6 pages for 'chase aucoin'.

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
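    (For contrast, the same query as a prepared statement; a minimal sketch assuming a PDO connection handle $pdo, with the table and column names taken from the excerpt:)

        <?php
        // Assumes an existing PDO connection ($pdo); names follow the excerpt.
        $stmt = $pdo->prepare('SELECT * FROM profile WHERE profile_id = ?');
        // The user-supplied value travels as bound data, never as SQL text,
        // so it cannot alter the structure of the query.
        $stmt->execute(array($_REQUEST['profile_id']));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);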

    Read the article

  • haproxy and tomcat intermittent hangs

    - by Lorin
    I am trying to run haproxy in front of tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, the request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the tomcat manager app, and hitting tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, tomcat on port 7000. haproxy returns a 504 gateway timeout to the client, and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the tomcat server. The request count is not incremented, and the manager app only shows activity on one thread, serving up the manager app. Here are my haproxy and tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy
        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5
        [root@host java]# rpm -evv jre-6u20-linux-i586.rpm
        D: opening db environment /var/lib/rpm/Packages joinenv
        D: opening db index /var/lib/rpm/Packages rdonly mode=0x0
        D: locked db index /var/lib/rpm/Packages
        D: opening db index /var/lib/rpm/Name rdonly mode=0x0
        error: package jre-6u20-linux-i586.rpm is not installed
        D: closed db index /var/lib/rpm/Name
        D: closed db index /var/lib/rpm/Packages
        D: closed db environment /var/lib/rpm/Packages
        [root@host java]# rpm -qi --scripts jre-6u20-linux-i586.rpm
        package jre-6u20-linux-i586.rpm is not installed
        [root@host java]#

    I couldn't find any results on Google for any parts of that error message, and I have very little experience of rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
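    (A note on the last two commands above: they query the installed-package database by file name, which is why rpm answers "not installed". To read the scriptlets inside the package file itself, rpm takes -p; a minimal sketch, run from the directory holding the file:)

        # -q --scripts prints the pre/post scriptlets; -p points rpm at a
        # package *file* rather than the installed-package database.
        rpm -qp --scripts jre-6u20-linux-i586.rpm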

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5
        [root@host java]# rpm -qi jre
        Name        : jre                       Relocations: /usr/java
        Version     : 1.6.0_20                       Vendor: Sun Microsystems, Inc.
        Release     : fcs                        Build Date: Mon Apr 12 19:34:13 2010
        Install Date: Thu May 6 06:36:17 2010    Build Host: jdk-lin-1586
        Group       : Development/Tools          Source RPM: jre-1.6.0_20-fcs.src.rpm
        Size        : 50708634                      License: Sun Microsystems Binary Code License (BCL)
        Signature   : (none)
        Packager    : Java Software <[email protected]>
        URL         : http://java.sun.com/
        Summary     : Java(TM) Platform Standard Edition Runtime Environment
        Description :
        The Java Platform Standard Edition Runtime Environment (JRE) contains
        everything necessary to run applets and applications designed for the
        Java platform. This includes the Java virtual machine, plus the Java
        platform classes and supporting files. The JRE is freely redistributable,
        per the terms of the included license.
        [root@host java]#

    I couldn't find any results on Google for any parts of that error message, and I have very little experience of rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?

    Read the article

  • CouchDB Errors: "undefined symbol: js_fgets" on Ubuntu 10.04

    - by MattEzell
    Hello, ServerFault Community. I have been wrestling with this problem for weeks without resolution and was hoping that someone here might be able to help. As indicated above, this is on Ubuntu 10.04 (x86) using CouchDB 0.11.0. I have built and installed CouchDB 0.11.0 from source. Everything with the dependencies and with the CouchDB install itself 'goes off without a hitch' - no errors or complaints in the configure, make or install... CouchDB seems to be running 'fine'. I can access Futon without issue and utilize all CouchDB functionality found in Futon... Unfortunately, when I attempt to use any shows/views for an installed Couch App, I get the above js_fgets error before the terminal (and the Couch log) fills up with TONS of JSON. Nothing ever renders in the browser, though Firebug reports. I have used the official instructions (paying special attention to the 10.04 instructions) and have followed pretty much every Google thread that I can find on similar issues. I have chased SpiderMonkey (and Rhino) as well as Erlang as the culprit, but despite reinstalls and tests with these components, I still cannot get past this CouchDB issue on my system... Ideas? Pointers? Suggestions? Has anyone successfully installed and used CouchDB 0.11.0 on an Ubuntu 10.04 system to RUN APPLICATIONS? I have come across multiple individuals who immediately respond 'yes, I have it installed - it works great' only to have them realize in the end (as I did) that just because Futon thinks things are working, doesn't mean CouchDB is properly handling ALL requests. Thank you for your time and assistance!
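    (One check worth recording here: the js_fgets symbol comes from SpiderMonkey, so a mismatch between the libmozjs that couchjs was built against and the one it loads at run time is a plausible culprit. A sketch for confirming which library is actually linked; the path assumes a default source-install prefix:)

        # Show which SpiderMonkey shared library the couchjs helper resolves to.
        ldd /usr/local/lib/couchdb/bin/couchjs | grep -i mozjs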

    Read the article

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32GB RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key]

        => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [],
           but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify

    I tried several things, including changing the current product key to the MSDN one. Eventually I used a KMS generic key which can be found in several technet forum posts:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key]

    ... and this appeared to work. I then changed the product key again (using the control panel) to the MSDN key, thinking that was the end of the matter. Only later when I tried to start up VMs did I realise I only had 4GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only later when I saw this...

    http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b

    ...did I make the connection and reapply the KMS generic key - which gave me all the RAM back. But now I have a system that isn't properly licensed. Presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4GB RAM is usable. Is there a way round this without a) rebuilding the server from scratch with the MSDN key from the start, or b) buying a retail Enterprise license?

    Read the article

  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that, ".exe is not commonly downloaded and could harm your computer." There were three buttons, "Delete", "Actions", and "View downloads". I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog basically giving three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server. So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc. Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox and then ran it on the server, again fine. But when I tried again through IE, again SmartScreen showed its red banner and somehow clobbered the file even though it was stored on another machine, and WinMerge can't tell the difference between it and a good file. I've looked around on the web for how SmartScreen works, but they all give user-level descriptions of it. What I want to know is, what does it do to that file to make it unrunnable on another machine? Thanks
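    (Background that may help anyone chasing the same behavior: Windows marks downloaded files with an NTFS "Zone.Identifier" alternate data stream, which is what the Unblock button clears, and it can be inspected from a command prompt. A sketch with an illustrative file name; ZoneId=3 denotes the Internet zone:)

        C:\> more < installer.exe:Zone.Identifier
        [ZoneTransfer]
        ZoneId=3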

    Read the article

  • Making a Job Change That's Easy – Why Not Try a Career Change

    - by david.talamelli
    A few nights ago I received a comment on one of our blog posts that reminded me of a statistic that I heard a while back. The statistic reflected the change in our views towards work and showed how, while people in past generations would stay in one role for their working career, now with so much choice people not only change jobs often but also change careers 4-5 times in their working life. To differentiate between a job change and a career change: when I say job change this could be an IT Sales person moving from one IT Sales role to another IT Sales role. A Career change for example would be that same IT Sales person moving from IT Sales to something outside the scope of their industry - maybe to something like an Engineer or Scuba Dive Instructor. The reasons for Career changes can be as varied as the people who make them. Someone's motivation could be to pursue a passion, or maybe there is a change in their personal circumstances forcing the change, or it could be any number of other reasons. I think it takes courage to make a Career change - it can be easy to stay in your comfort zone and do what you know, but to really push yourself sometimes you need to try something new; it is a matter of making that career transition as smooth as possible for yourself. The comment that was posted is here below (thanks Dean for the kind words, they are appreciated): "Hi David, I just wanted to let you know that I work for a company called Milestone Search in Melbourne, Victoria Australia. (www.mstone.com.au) We subscribe to your feed on a daily basis and find your blogs both interesting and insightful. Not to mention extremely entertaining. I wonder if you have missed out on getting in journalism as this seems to be something you'd be great at? :)" Anyways, back to my point about changing careers. This could be anything from going from I.T. to Journalism, Engineering to Teaching or any combination of career you can think of. I don't think there ever has been a time where we have had so many opportunities to do so many different things in our working life. While this idea sounds great in theory, putting it into practice would be much harder to do, I think. First, in an increasingly competitive job market, employers tend to look for specialists in their field. You may want to make a change but your options may be limited by the number of employers willing to take a chance on someone new to an industry, who will likely require a significant investment in time to get brought up to speed. Also, using myself as an example: if I was given the opportunity to move into a Journalism/Communication/Marketing career from my career as an IT Recruiter, realistically I would have to take a significant pay cut to make this change, as my current salary reflects the expertise I have in my current career. I would not immediately be up to speed moving into a new career and would not be able to justify a similar salary. Yes there are transferable skills in any career change, but even though you may have transferable skills you must realise that you will also have a large amount of learning to do, which would take time. These are two initial hurdles that I immediately think of; there may be more, but nothing is insurmountable. If you work out what you want to do with your working career, whatever that may be, you then just need to work out the steps to get to your end goal. This is where utilising the power of your networks and using Social Media can come in handy.
    If you are interested in working somewhere, why not proactively take the opportunity to research the industry or company - find out who it is you need to speak to and get in touch with them. We spend so much time working, we should enjoy the work we do and not be afraid to try new things. Waiting for your dream job to fall into your lap or be handed to you on a silver platter is not likely to happen, so if there is something you do want to do, work out a plan to make it happen and chase after it. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned:

    - ETL – how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360 Degree view of customers – become pro-active and look at events across lines of business. For example make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies]
    - Data Mining – bypass data engineering [I interpret this as running against all of a large data set rather than on samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline

    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting, and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases:

    - Determine the Strategy – Design a platform and evangelize this within the organization
    - Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience
    - Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform

    This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today’s relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity hardware scale-out models. All in all I liked this conference; it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into two 12-hour blocks over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!)

    Multidimensional Thinking: The Presentation

    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did!

    Kimball All the Way

    There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction, because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right!

    Schema Generation

    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this.
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • Internal and External Bug-Tracking Setup

    - by devdude
    Most of you certainly use some kind of bugtracker. Maybe internally only: once a customer files a bug via email or phone, you add a new ticket yourself. Sometimes weekly project meetings can be a great source of new tickets, coming preferably in flavors of excel sheets that the PM on the other side of the table loves to maintain and chase after you. The more advanced (and transparent) version: allow the customer to file (and see the progress of) his bugs directly in your bugtracker. Systems like JIRA allow you to use profiles to have certain access rights, etc. But now the question: the bug raised by a user does not necessarily translate into 1 bug in a specific module/method/EJB/class. The version of the (your) web application he uses does not translate into the version of the class that is causing the error. How do you maintain the internal part of the ticket, with all the nasty techy details, and at the same time the make-the-user-feel-good ticket (need more info, accepted, in progress, ...)? Creating 2 tickets for internal and external? Linking them? Any smart recipes to share?

    Read the article

  • Batch Geocoding with Garmin Mapsource

    - by Mike Trader
    Please Note: I AM NOT LOOKING FOR AN ALTERNATIVE SOLUTION. I lost track of this effort years ago but have need to geocode thousands of addresses nightly. I must use the very accurate database sitting on the machine, installed when the Nuvi map update installed Mapsource. When I contacted Garmin years ago, they expressed an interest in providing an API for this, but then I heard nothing and did not follow up. Their database is provided by Navteq, I believe. Anyone have experience with that format? I posted on the Garmin Developer forum a while ago, but it's a little lethargic over there :) Has anyone done this? Does anyone know how it might be done without an API, meaning database structure and calls? I'll take a solution in any language. Added: Garmin has expressed an interest in making this available to me. They just have not done it. I do not know the database format. I am NOT looking for an online solution or any other "alternative". This question is very specific. Contact Info: MikeTrader2 A T gmail D O T com Added: I offered a 400 pt bounty for this. Jeff Atwood then offered 400pts also. If you would like to see a solution to this, vote up the question and I will chase up Garmin and show there is interest in finally providing this. Please Note: I AM NOT LOOKING FOR AN ALTERNATIVE SOLUTION

    Read the article

  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
    On our production SQL2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table "early" on in the code, after which various inner stored procedures get EXECUTEd by this parent sProc. In SQL2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection. In testing with SQL2008, I find 2 manifestations of different behavior. First, at design time, the new "intellisense" feature is complaining in Management Studio EDIT of the child sProc that #TEMP is an "invalid object name". But worse is that at execution time, the invoked parent sProc fails inside the nested child sProc. Someone suggested that the solution is to change to ##TEMP, which is apparently a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both in the amount of work to chase down all the problem spots and in the possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues). Is this indeed a change in behavior in SQL2005 or SQL2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to learn more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.
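    (For readers who haven't used the pattern, a minimal sketch of it with illustrative names. Note the child references a #TEMP that does not exist when the child is created; it is resolved at execution time, which is also what the intellisense warning trips over at design time:)

        CREATE PROCEDURE dbo.ChildProc
        AS
        BEGIN
            -- #TEMP is resolved at run time against the calling session,
            -- so this compiles even though no #TEMP exists yet.
            INSERT INTO #TEMP (Id) VALUES (2);
            SELECT Id FROM #TEMP;
        END
        GO

        CREATE PROCEDURE dbo.ParentProc
        AS
        BEGIN
            CREATE TABLE #TEMP (Id INT);
            INSERT INTO #TEMP (Id) VALUES (1);
            EXEC dbo.ChildProc;  -- same session, deeper scope: #TEMP is visible
        END
        GO

        EXEC dbo.ParentProc;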

    Read the article

  • Can LINQ expression classes implement the observer pattern instead of deferred execution?

    - by Tormod
    Hi. We have issues within an application using a state machine. The application is implemented as a windows service and is iteration based (it "foreaches" itself through everything), and there are myriads of instances being processed by the state machine. As I'm reading the MEAP version of Jon Skeet's book "C# in Depth, 2nd ed", I'm wondering if I can change the whole thing to use linq expression instances so that guards and conditions are represented using expression trees. We are building many applications on this state machine engine and would probably greatly benefit from the new Expression tree visualizer in VS 2010. Now, a simple example: if I have an expression tree where there is an OR expression condition with two sub nodes, is there any way that these can implement the observer pattern so that the expression tree becomes event driven? If a condition changes, it should notify its parent node (the OR node). Since the OR node then changes from "false" to "true", it should notify ITS parent, and so on. I love the declarative model of expression trees, but the deferred execution model works in the opposite direction of the control flow if you want event based "live" conditions. Am I off on a wild goose chase here? Or is there some concept in the BCL that may help me achieve this?
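    (To make the idea concrete: LINQ's Expression nodes are immutable and pull-evaluated, so event-driven "live" conditions would need a parallel node type of your own rather than the BCL classes. A rough sketch of the OR node described above; all of these classes are hypothetical:)

        using System;

        // A condition node that pushes value changes up to its subscribers.
        abstract class ConditionNode
        {
            public event Action<bool> ValueChanged;
            private bool current;

            public bool Value
            {
                get { return current; }
                protected set
                {
                    if (current == value) return;  // only notify on real changes
                    current = value;
                    var handler = ValueChanged;
                    if (handler != null) handler(value);
                }
            }
        }

        // A guard flips this leaf when the underlying state changes.
        class LeafCondition : ConditionNode
        {
            public void Set(bool newValue) { Value = newValue; }
        }

        // Recomputes and bubbles upward whenever either child changes.
        class OrCondition : ConditionNode
        {
            private readonly ConditionNode left;
            private readonly ConditionNode right;

            public OrCondition(ConditionNode left, ConditionNode right)
            {
                this.left = left;
                this.right = right;
                left.ValueChanged += v => Recompute();
                right.ValueChanged += v => Recompute();
                Recompute();
            }

            private void Recompute() { Value = left.Value || right.Value; }
        }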

    Read the article

  • DotNetOpenAuth / WebSecurity Basic Info Exchange

    - by Jammer
    I've gotten a good number of OAuth logins working on my site now. My implementation is based on the WebSecurity classes, with amendments to the code to suit my needs (I pulled the WebSecurity source into mine). However I'm now facing a new set of problems. In my application I have opted to make the user email address the login identifier of choice. It's naturally unique and suits this use case. However, the OAuth "standards" strike again. Some providers will return your email address as "username" (Google); some will return the display name (Facebook). As it stands I see two options given my particular scenario:

    Option 1: Pull even more framework source code into my solution until I can chase down where the OpenIdRelyingParty class is actually interacted with (via the DotNetOpenAuth.AspNet facade) and make additional information requests from the OpenID Providers.

    Option 2: When a user first logs in using an OpenID provider, I can display a kind of "complete registration" form that requests missing info based on the provider selected.

    Option 2 is the most immediate and probably the quickest to implement, but also includes some code smells through having to do something different based on the provider selected. Option 1 will take longer but will ultimately make things more future proof. I will need to perform richer interactions down the line, so this also has an edge in that regard. The more I get into the code, it does seem that the WebSecurity class itself is actually very limiting, as it hides lots of useful DotNetOpenAuth functionality in the name of making integration easier. Andrew (the author of DNOA) has said that the Attribute Exchange stuff happens in the OpenIdRelyingParty class, but I cannot see from the DotNetOpenAuth.AspNet source code where this class is used, so I'm unsure of what source would need to be pulled into my code in order to enable the functionality I need. Has anyone completed something similar?

    Read the article

  • Are Java developers really paid more than .Net developers? Also, should I just try and work with C?

    - by Jon
    Ok, so I graduated about a year and a half ago. In college days I used to be really good (and keen) on C/C++. I've even done some multithreading and socket programming in C. Created networked GUI apps for linux using GTK etc. Now, I've been working since I graduated and well, let's say, I've been stereotyped with .net. That's all I have experience with in these years (Vb.net, asp.net, javascript). I'm pretty comfortable with all of the above and also SQL Server. a) Cutting to the chase, I've begun to notice how .net jobs don't seem to pay as well as Java. Jobs seeking similar numbers of years of experience offer almost twice as much for Java developers. Does it mean I should consider switching? b) My heart still goes out for C. As a surrogate I've tried working with VC++ 6.0 (I know they aren't the same!). I've tried my hand with the DDK, SDK, COM etc. Should I try finding jobs there or am I too inexperienced? Somehow, I can't find any openings for C. Where are they? Sometimes I get frustrated and feel that maybe I should just go for MS or something. Possibly that'll help me in landing 'fatter paying jobs' in the field that I wish to work in.

    Read the article

  • Debugging instance of another thread altering my data

    - by Mick
    I have a huge global array of structures. Some regions of the array are tied to individual threads, and those threads can modify their regions of the array without having to use critical sections. But there is one special region of the array which all threads may have access to. The code that accesses these parts of the array needs to carefully use critical sections (each array element has its own critical section) to prevent any possibility of two threads writing to the structure simultaneously. Now I have a mysterious bug I am trying to chase; it is occurring unpredictably and very infrequently. It seems that one of the structures is being filled with some incorrect number. One obvious explanation is that another thread has accidentally been allowed to set this number when it should be excluded from doing so. Unfortunately it seems close to impossible to track this bug. The array element in which the bad data appears is different each time. What I would love to be able to do is set some kind of trap for the bug as follows: I would enter a critical section for array element N, then I know that no other thread should be able to touch the data, then (until I exit the critical section) set some kind of flag to a debugging tool saying "if any other thread attempts to change the data here please break and show me the offending patch of source code"... but I suspect no such tool exists... or does it? Or is there some completely different debugging methodology that I should be employing?

    Read the article

  • Creating UIButtons

    - by Ralphonzo
    During loadView I am creating 20 UIButtons whose title text I would like to change depending on the state of a UIPageControl. I have a pre-saved plist that is loaded into an NSArray called arrChars. On the event of the current page changing, I set the buttons' titles to their relevant text title from the array. The code that does this is:

        for (int i = 1; i < (ButtonsPerPage + 1); i++) {
            UIButton *uButton = (UIButton *)[self.view viewWithTag:i];
            if (iPage == 1) {
                iArrPos = (i - 1);
            } else {
                iArrPos = (iPage * ButtonsPerPage) + (i - 1);
            }
            [uButton setAlpha:0];
            NSLog(@"Trying: %d of %d", iArrPos, [self.arrChars count]);
            if (iArrPos >= [self.arrChars count]) {
                [uButton setTitle: @"" forState: UIControlStateNormal];
            } else {
                NSString *value = [[NSString alloc] initWithFormat:@"%@", [self.arrChars objectAtIndex:iArrPos]];
                NSLog(@"%@", value);
                [uButton setTitle: [NSString stringWithFormat:@"%@", value] forState: UIControlStateNormal];
                [value release];
                ////// Have tried:
                ////// [uButton setTitle: value forState: UIControlStateNormal];
                ////// Have also tried:
                ////// [uButton setTitle: [self.arrChars objectAtIndex:iArrPos] forState: UIControlStateNormal];
                ////// Have also also tried:
                ////// [uButton setTitle: [[self.arrChars objectAtIndex:iArrPos] autorelease] forState: UIControlStateNormal];
            }
            [uButton setAlpha:1];
        }

    When setting the title of a button, it does not appear to be autoreleasing the previous title, and the allocation count goes up and up. What am I doing wrong? I have been told before that tracking things by allocations is a bad way to chase leaks because, as far as I can see, the object is not leaking in Instruments, but my total living allocations continue to climb until I get a memory warning. If there is a better way to track here I would love to know.

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
        var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab page.

    Cut to the chase, already

    Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1

    @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.

    @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.

    @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999',
         'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680,
         'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false',
         'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space... and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all!

    *other than the obvious: sending less data to the browser

    Read the article

  • Robust way to save/load objects with dependencies?

    - by mrteacup
    I'm writing an Android game in Java and I need a robust way to save and load application state quickly. The question seems to apply to most OO languages. To understand what I need to save: I'm using a Strategy pattern to control my game entities. The idea is I have a very general Entity class which e.g. stores the location of a bullet/player/enemy, and I then attach a Behavior class that tells the entity how to act:

        class Entity {
            float x;
            float y;
            Behavior b;
        }

        abstract class Behavior {
            void update(Entity e) {}
        }

        // Move about at a constant speed
        class MoveBehavior extends Behavior {
            float speed;
            void update ...
        }

        // Chase after another entity
        class ChaseBehavior extends Behavior {
            Entity target;
            void update ...
        }

        // Perform two behaviours in sequence
        class CombineBehavior extends Behavior {
            Behavior a, b;
            void update ...
        }

    Essentially, Entity objects are easy to save, but Behavior objects can have a semi-complex graph of dependencies between other Entity objects and other Behavior objects. I also have cases where a Behavior object is shared between entities. I'm willing to change my design to make saving/loading state easier, but the above design works really well for structuring the game. Anyway, the options I've considered are:

    Use Java serialization. This is meant to be really slow in Android (I'll profile it sometime). I'm worried about robustness when changes are made between versions, however.

    Use something like JSON or XML. I'm not sure how I would cope with storing the dependencies between objects, however. Would I have to give each object a unique ID and then use these IDs on loading to link the right objects together? I thought I could e.g. change the ChaseBehavior to store an ID to an entity, instead of a reference, that would be used to look up the Entity before performing the behaviour.

    I'd rather avoid having to write lots of loading/saving code myself as I find it really easy to make mistakes (e.g. forgetting to save something, reading things out in the wrong order). Can anyone give me any tips on good formats to save to, or class designs that make saving state easier?
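    (The ID-lookup idea in the second option is a sound way out of the object-graph problem. A rough sketch of it, building on the classes above; the world map parameter and the changed update signature are illustrative, not part of the original design:)

        import java.util.Map;

        // Persist a plain int ID instead of an object reference; resolve it
        // against the game's entity table on each update (or once, on load).
        class ChaseBehavior extends Behavior {
            int targetId;  // trivially serializable

            ChaseBehavior(int targetId) { this.targetId = targetId; }

            void update(Entity e, Map<Integer, Entity> world) {
                Entity target = world.get(targetId); // look up, don't hold
                if (target == null) return;          // target was removed
                // ... steer e toward target.x / target.y ...
            }
        }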

    Read the article

  • Error during GENERAL_REQUEST_ENTITY for POST results in ASP .NET session state never getting unlocked

    - by Jesse
    I have been trying to chase down the root cause of a condition where ASP .NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider for session because we have several servers in a web farm. This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub status of 0). These findings lead me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should. I enabled Failed Request Tracing in IIS for the website in question for status code 500, with all available providers selected, each with the 'Verbose' setting for verbosity. I've since gathered several failed traces that have caused permanently locked ASP .NET sessions. They all share the same characteristics:

    - They are all 'POST' requests where the browser is posting data to be processed/saved.
    - They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as being "locked". This is normal and expected.
    - They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation, as these events get repeated over and over with each one reading in some subset of the posted data.
    - At some point during the 'read entity' related events, an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)".
    - Once the error has been encountered, they all jump directly to the END_REQUEST events.

    The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that will allow the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled failed request tracing for the '200' status code as well and generated several traces of successful requests that do have the RELEASE_REQUEST_STATE event being handled by the Session module. My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted because of either a thread exit or an application request' errors, but I don't understand why this seems to be causing the request handling to skip over the RELEASE_REQUEST_STATE event. If the request went through REQUEST_ACQUIRE_STATE, it seems like it should also hit RELEASE_REQUEST_STATE as well. I'm loath to say that this is a bug in IIS or ASP .NET, but it certainly appears that way to me at this point. Are there any configuration changes I could make to help ensure that 'RELEASE_REQUEST_STATE' is fired under all error conditions?

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly! If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to that the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64 bit computing and easy availability of larger amounts of memory in SQL Servers.
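    (For reference, the query being described takes this shape; a sketch, with the ORDER BY swapped per whichever worst-case metric is being chased:)

        SELECT TOP 10
               qs.max_worker_time,
               qs.max_logical_reads,
               qs.max_physical_reads,
               st.text       AS statement_text,
               qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.max_physical_reads DESC;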
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency. SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
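    (To put a number on the single-use plan problem described above, a quick sketch against the same DMV; column names as documented for SQL Server 2005 and 2008:)

        SELECT COUNT(*)                                     AS single_use_plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS single_use_mb
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
          AND objtype = 'Adhoc';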
SQL Server is constantly telling a story to anyone who will listen. As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture, considering how all the performance data you can gather from SQL Server fits together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation of why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out and offer a couple of other points I had noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest post on it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • WPA2 authentication fails using USB network devices (Linksys and Rosewill)

    - by Greg Youtz
    Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system using USB network device Rosewill RNX-X1. I can see and connect to unprotected network, but WPA2 authentication repeatedly fails. Tried the same with a Linksys USB network adapter. Both failed to authenticate. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to previous router (Linksys EA4500) using WPA2. Would think it is the router (Actiontec C1000A) but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect ok. Would appreciate some diagnostic guidance and insight (phrased for a newbie!) Tests to date: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 00:e0:4d:30:40:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff *-network description: Wireless interface physical id: 1 bus info: usb@1:2 logical name: wlan1 serial: 00:02:6f:bd:30:a0 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 
3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fe02d000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20 Memory at fe02c000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1] [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1] I/O ports at f000 [size=16] Capabilities: [44] Power Management version 2 Kernel driver in use: pata_amd Kernel modules: pata_amd 00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1) Subsystem: Biostar Microtech Int'l Corp Device 820c Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe024000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode]) Flags: bus master, 66MHz, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fdf00000-fdffffff Prefetchable memory behind bridge: fd000000-fd0fffff Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84 Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed- 00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO]) Subsystem: Biostar Microtech Int'l Corp Device 5407 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 I/O ports at 09f0 [size=8] I/O ports at 0bf0 [size=4] I/O ports at 0970 [size=8] I/O ports at 0b70 [size=4] I/O ports at dc00 [size=16] Memory at fe02a000 (32-bit, non-prefetchable) [size=8K] Capabilities: [44] Power Management version 2 Capabilities: [8c] SATA HBA v1.0 Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: ahci 00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) 
(prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fdc00000-fdcfffff Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 00009000-00009fff Memory behind bridge: fda00000-fdafffff Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 00008000-00008fff Memory behind bridge: fd800000-fd8fffff Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 00007000-00007fff Memory behind bridge: fd600000-fd6fffff Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 I/O behind bridge: 00006000-00006fff Memory behind bridge: fd400000-fd4fffff Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 PCI 
bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=08, subordinate=08, sec-latency=0 I/O behind bridge: 00005000-00005fff Memory behind bridge: fd200000-fd2fffff Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller]) Subsystem: Biostar Microtech Int'l Corp Device 1406 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fb000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at fc000000 (64-bit, non-prefetchable) [size=16M] [virtual] Expansion ROM at 80000000 [disabled] [size=128K] Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration Flags: fast devsel Capabilities: [80] HyperTransport: Host or Secondary Interface 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map Flags: fast devsel 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller Flags: fast devsel 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control Flags: fast devsel Capabilities: [f0] Secure device <?> Kernel driver in use: k8temp Kernel modules: k8temp 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) Subsystem: Biostar Microtech Int'l Corp Device 2305 Flags: bus master, fast devsel, latency 0, IRQ 47 I/O ports at ac00 [size=256] Memory at fdcff000 (64-bit, non-prefetchable) [size=4K] [virtual] Expansion ROM at fdb00000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Capabilities: [48] Vital Product Data Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] Express Endpoint, MSI 00 Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100] Advanced Error Reporting Capabilities: [12c] Virtual Channel Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68 Capabilities: [154] Power Budgeting <?> Kernel driver in use: r8169 Kernel modules: r8169 sudo rfkill list all 2: phy2: Wireless LAN Soft blocked: no Hard blocked: no Would appreciate insight on how to chase this down.

    Read the article

  • Game AI: Pattern for implementing Sense-Think-Act components?

    - by Rosarch
I'm developing a game. Each entity in the game is a GameObject. Each GameObject is composed of a GameObjectController, GameObjectModel, and GameObjectView. (Or subclasses thereof.) For NPCs, the GameObjectController is split into: IThinkNPC: reads current state and makes a decision about what to do IActNPC: updates state based on what needs to be done ISenseNPC: reads current state to answer world queries (e.g. "am I in the shadows?") My question: Is this ok for the ISenseNPC interface? public interface ISenseNPC { // ... /// <summary> /// True if `dest` is a safe point to which to retreat. /// </summary> /// <param name="dest"></param> /// <param name="angleToThreat"></param> /// <param name="range"></param> /// <returns></returns> bool IsSafeToRetreat(Vector2 dest, float angleToThreat, float range); /// <summary> /// Finds a new location to which to retreat. /// </summary> /// <param name="angleToThreat"></param> /// <returns></returns> Vector2 newRetreatDest(float angleToThreat); /// <summary> /// Returns the closest LightSource that illuminates the NPC. /// Null if the NPC is not illuminated. /// </summary> /// <returns></returns> ILightSource ClosestIlluminatingLight(); /// <summary> /// True if the NPC is sufficiently far away from target. /// Assumes that target is the only entity it could ever run from. /// </summary> /// <returns></returns> bool IsSafeFromTarget(); } None of the methods take the world state as a parameter. Instead, the implementation is expected to maintain a reference to the relevant GameObjectController and read that. However, I'm now trying to write unit tests for this. Obviously, it's necessary to use mocking, since I can't pass arguments directly. The way I'm doing it feels really brittle: what if another implementation comes along that uses the world query utilities in a different way? Really, I'm not testing the interface, I'm testing the implementation. Poor. The reason I used this pattern in the first place was to keep IThinkNPC implementation code clean: public BehaviorState RetreatTransition(BehaviorState currentBehavior) { if (sense.IsCollidingWithTarget()) { NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "is colliding with target"); return BehaviorState.ATTACK; } if (sense.IsSafeFromTarget() && sense.ClosestIlluminatingLight() == null) { return BehaviorState.WANDER; } if (sense.ClosestIlluminatingLight() != null && sense.SeesTarget()) { NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.CHASE.ToString(), "sees target"); return BehaviorState.CHASE; } return currentBehavior; } Perhaps the cleanliness isn't worth it, however. So, if ISenseNPC takes all the params it needs every time, I could make it static. Is there any problem with that?

    Read the article
