Search Results

Search found 1086 results on 44 pages for 'switches'.


  • Slow Routing Over LAN (Wired)

    - by reverendj1
    I'm having issues with my router (an Adtran NetVanta 3458) being very slow. We have two networks; let's call them A and B. When I run netperf between two servers on network A (no routing) I get speeds along the lines of 900 Mbps, which makes sense since we have all 1 Gbps switches. When testing A to B (or vice versa) I get speeds along the lines of 22 Mbps. I have also tested by connecting my laptop to the switchports on the router and testing between two servers on network A (no routing), and got speeds around 90 Mbps, which makes sense since the switchports on the router are 100 Mbps. Does anyone have any idea why routing would be so slow? We bought the router over a year ago, and we think it has been doing this since then, but we never actually tested it before (network B isn't really used much, so we didn't notice). We were implementing a site-to-site VPN and noticed it was ridiculously slow, so we started testing basic routing performance. I have ruled out cabling and router CPU/memory utilization. Adtran looked at my config but didn't see anything wrong with it.
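
    For reference, the kind of netperf measurement described above can be reproduced with something like the following (a minimal sketch; the address is a placeholder for a host on network B, and it assumes the netperf/netserver pair is installed on both machines):

        # On the server being tested against, start the netperf listener:
        netserver

        # On the sending host, run a 30-second TCP throughput test through the router:
        netperf -H 192.168.2.10 -l 30 -t TCP_STREAM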

    Read the article

  • some HTTPS sites getting blocked on one machine in network

    - by shadowfoxmi
    I have a few computers connected to the internet via a router. I have been having some trouble with one Windows 7 desktop. I can browse most sites without any trouble, but on some sites where the sign-in page switches to a secure connection (HTTPS), the page does not load. It's not all such sites, though; I'm able to sign into Gmail and a few other services that I know use HTTPS. The sites I'm having trouble with: Yahoo's sign-in page, and the one I have been using to test across different systems, http://iforgot.apple.com (which switches to HTTPS). That particular site I can access from other computers on the network and from my phone. I only have Windows Firewall running, plus AVG. I even tried stopping Windows Firewall, but it did not help. Everything was fine last week. All I have installed in the past week is VoIP software, namely Skype, ooVoo and Windows Live Messenger. I'm not sure how to find out what's being blocked, why, and how to unblock it. Any suggestions would be greatly appreciated.
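
    One quick check on the affected machine (a sketch for narrowing things down, not a definitive diagnosis) is to confirm that the name resolves and that a TCP connection to port 443 opens at all, which separates a DNS or connectivity problem from a TLS or filtering one. From a command prompt (the Telnet client may need to be enabled under Windows Features first):

        nslookup iforgot.apple.com
        telnet iforgot.apple.com 443

    If the telnet window goes blank, the connection opened and the problem lies higher up; if it times out, something is dropping the connection itself.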

    Read the article

  • Seeking faster access/transfer times for accounting application

    - by Markaway
    Our accounting software, Sage 50, has been getting slower to open on workstations and to read the company file. The company file only contains two years' worth of transactions, and we just cleared out 2011, so the file has gotten a lot smaller. There are 10 users, 6 of whom are on it all day; 4 are on and off throughout the day. Our network is entirely GbE and the switches are set to prioritize traffic on that port number. Watching network traffic, we barely use 40% of the network capacity on a workstation, so I don't think that is our bottleneck. Our server contains two older 150 GB Raptors, SATA 2 (3 Gb/s), in RAID 1. We were considering switching to SSDs, but a lot of what I read says to stay away from MLC drives, especially in a production environment, and to definitely avoid putting them in a RAID config. So would upgrading to newer Raptors with SATA 3 (6 Gb/s) offer noticeable benefits? What other options are out there that aren't so expensive? I'm trying to keep it to $200-300 per drive. We need at least 150 GB, but going to 250-300 GB would be better as it gives us more room to grow. We have about 30% space remaining on what we have now.

    Read the article

  • Raspberry Pi entrance sign backed by Umbraco - Part 1

    - by Chris Houston
    Being experts on all things Umbraco, we jumped at the chance to help our client, QV Offices, with their pressing signage predicament. They needed to display a sign in the entrance to their building and approached us for our advice. Of course it had to be electronic: displaying multiple names of their serviced office clients, meeting room bookings and on-the-pulse promotions. But with a winding Victorian staircase and minimal storage space, how could the monitor be run, updated and managed? That's where we came in…

    - Raspberry Pi
    - Umbraco CMS
    - Automatic updates
    - Automated monitoring of the sign
    - Power saving when the screen is not in use

    Mounting the screen
    The screen we used is a standard low-energy Full HD LED screen, mounted on the wall using its VESA mounting points. As the wall is a stud wall, we were able to add an access panel behind the screen to feed through the mains, HDMI and sensor cables. The Raspberry Pi is then tucked away out of sight in the main electrical cupboard, which just happens to be next to the sign; we had an electrician add a power point inside this cupboard to allow us to power the screen and the Raspberry Pi.

    Designing the interface and editing the content
    Although a room sign was the initial requirement from QV Offices, their medium-term goal has always been to add online meeting booking to their website, and hence we suggested adding information about the current and next day's meetings to the sign, pulled directly from their online booking system. We produced the design and built the web page to fit exactly on a 1920 x 1080 screen (Full HD in portrait). As you would expect, all the information can be edited via an Umbraco CMS: they are able to add floors, rooms, clients and virtual clients, as well as add meeting bookings to their meeting diary.

    How we configured the Raspberry Pi
    After receiving a new Raspberry Pi we downloaded the latest release of the Raspbian operating system and followed the official guide which shows how to copy the OS onto an SD card from a Mac. We then followed the majority of the steps in this useful guide: 10 Things to Do After Buying a Raspberry Pi.

    Installing Chromium
    We chose to use the Chromium web browser, which for those who do not know is the open-sourced version of Google Chrome. You can install it from the terminal with the following command:

        sudo apt-get install chromium-browser

    Installing Unclutter
    We found this little application, which automatically hides the mouse pointer. It is used in the script below and is installed using the following command:

        sudo apt-get install unclutter

    Auto-starting Chromium and disabling the screen saver, power saving and mouse
    Once installed, the Raspberry Pi will not have a keyboard or mouse, and hence if there were a power cut we needed it to always boot and reload Chromium with the correct URL. Our preferred command-line text editor is Nano, and I have assumed you know how to use this editor or will be able to work it out pretty quickly. So, using the following command:

        sudo nano /etc/xdg/lxsession/LXDE/autostart

    we changed the autostart file content to:

        @lxpanel --profile LXDE
        @pcmanfm --desktop --profile LXDE
        @xscreensaver -no-splash
        @xset s off
        @xset -dpms
        @xset s noblank
        @chromium --kiosk --incognito http://www.qvoffices.com/someURL
        @unclutter -idle 0

    The first few commands turn off the screen saver and power saving; we then open Chromium in kiosk mode (full screen with no menus, etc.) and pass in the URL to use (I have changed the URL in this example). We found a useful blog post listing the Chromium command-line switches. Finally, we also start Unclutter, which hides the mouse after 0 seconds of idle time, so you will never see a mouse pointer on the sign.

    We also had to edit the following file:

        sudo nano /etc/lightdm/lightdm.conf

    and add the following line under the [SeatDefault] section:

        xserver-command=X -s 0 dpms

    Refreshing the screen
    We decided to add a scheduled task that triggers Chromium to reload the page; at some point in the future we might change this to using JavaScript to update the content, but for now this works fine. First we installed xdotool, which enables you to script keyboard commands:

        sudo apt-get install xdotool

    We used the Refreshing Chromium Browser by Shell Script post as a reference and created the following shell script (which we called refreshing.sh):

        export DISPLAY=":0"
        WID=$(xdotool search --onlyvisible --class chromium|head -1)
        xdotool windowactivate ${WID}
        xdotool key ctrl+F5

    This selects the correct display and then sends Ctrl+F5 to refresh Chromium. You will need to give this file execute permissions:

        chmod a=rwx refreshing.sh

    Now that the script is set up, we just need to schedule it to run periodically, which is done with crontab. To edit it, use the following command:

        crontab -e

    We added the following:

        */5 * * * * DISPLAY=":.0" /home/pi/scripts/refreshing.sh >/home/pi/cronlog.log 2>&1

    This calls our script every 5 minutes to refresh the display and logs any errors to the cronlog.log file.

    Summary
    QV Offices now have a richer and more manageable booking system than they did before we started, and a great new sign to boot. How could we make sure that the sign was running smoothly downstairs in a busy office centre? A second post will follow outlining exactly how Vizioz enabled QV Offices to monitor their sign simply and remotely, from the comfort of their desks.

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic's demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model.

    Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata's preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    Challenges
    - Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    - Provide technical support and demonstration facilities for ISVs migrating their customers' solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment
    - Demonstrate the power of Oracle Exadata's high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    - Showcase Oracle Exadata's unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages
    - Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions
    - Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    - Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    - Proved how Oracle Exadata's pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    - Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider's client-facing customer relationship management solution
    - Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    - Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day's business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    - Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    - Demonstrated that Oracle Exadata's built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    - Showed how Oracle Exadata's columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms
    - Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    - Proved to ISVs that migrating solutions to Oracle Exadata's preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    - Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    - On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Ubuntu 10.04 & IBM DS3524 with FC multipath, inactive path is [failed][faulty] instead of [active][ghost]

    - by Graeme Donaldson
    OK, this is my setup:

    FC switches: IBM/Brocade, Switch1 and Switch2, independent fabrics.
    Server: IBM x3650 M2, 2x QLogic QLE2460, one connected to each FC switch.
    Storage: IBM DS3524, 2x controllers with 4x FC ports each, but only 2x connected on each.

        +-----------------------------------------------------------------------+
        | HBA1                          Server                             HBA2 |
        +-----------------------------------------------------------------------+
             |                                                              |
        +-----------------------------+           +-----------------------------+
        |           Switch1           |           |           Switch2           |
        +-----------------------------+           +-----------------------------+
             |              |                          |              |
        +-----------------------------------+-----------------------------------+
        | Contr A, port 3 | Contr A, port 4 | Contr B, port 3 | Contr B, port 4 |
        +-----------------------------------+-----------------------------------+
        |                                Storage                                |
        +-----------------------------------------------------------------------+

    My /etc/multipath.conf is from the IBM Redbook for the DS3500, except that I use a different setting for prio_callout. IBM uses /sbin/mpath_prio_tpc, but according to http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-7ubuntu2/changelog this was renamed to /sbin/mpath_prio_rdac, which is what I'm using.

        devices {
            device {
                #ds3500
                vendor                "IBM"
                product               "1746 FAStT"
                hardware_handler      "1 rdac"
                path_checker          rdac
                failback              0
                path_grouping_policy  multibus
                prio_callout          "/sbin/mpath_prio_rdac /dev/%n"
            }
        }
        multipaths {
            multipath {
                wwid                  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                alias                 array07
                path_grouping_policy  multibus
                path_checker          readsector0
                path_selector         "round-robin 0"
                failback              "5"
                rr_weight             priorities
                no_path_retry         "5"
            }
        }

    The output of multipath -ll with controller A as the preferred path:

        root@db06:~# multipath -ll
        sdg: checker msg is "directio checker reports path is down"
        sdh: checker msg is "directio checker reports path is down"
        array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt
        [size=4.9T][features=1 queue_if_no_path][hwhandler=0]
        \_ round-robin 0 [prio=2][active]
         \_ 5:0:1:0 sdd 8:48  [active][ready]
         \_ 5:0:2:0 sde 8:64  [active][ready]
         \_ 6:0:1:0 sdg 8:96  [failed][faulty]
         \_ 6:0:2:0 sdh 8:112 [failed][faulty]

    If I change the preferred path to controller B using IBM DS Storage Manager, the output swaps accordingly:

        root@db06:~# multipath -ll
        sdd: checker msg is "directio checker reports path is down"
        sde: checker msg is "directio checker reports path is down"
        array07 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) dm-2 IBM ,1746 FASt
        [size=4.9T][features=1 queue_if_no_path][hwhandler=0]
        \_ round-robin 0 [prio=2][active]
         \_ 5:0:1:0 sdd 8:48  [failed][faulty]
         \_ 5:0:2:0 sde 8:64  [failed][faulty]
         \_ 6:0:1:0 sdg 8:96  [active][ready]
         \_ 6:0:2:0 sdh 8:112 [active][ready]

    According to IBM, the inactive path should be "[active][ghost]", not "[failed][faulty]". Despite this, I don't seem to have any I/O issues, but my syslog is being spammed with this every 5 seconds:

        Jun  1 15:30:09 db06 multipathd: sdg: directio checker reports path is down
        Jun  1 15:30:09 db06 kernel: [ 2350.282065] sd 6:0:2:0: [sdh] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun  1 15:30:09 db06 kernel: [ 2350.282071] sd 6:0:2:0: [sdh] Sense Key : Illegal Request [current]
        Jun  1 15:30:09 db06 kernel: [ 2350.282076] sd 6:0:2:0: [sdh] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1
        Jun  1 15:30:09 db06 kernel: [ 2350.282083] sd 6:0:2:0: [sdh] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        Jun  1 15:30:09 db06 kernel: [ 2350.282092] end_request: I/O error, dev sdh, sector 0
        Jun  1 15:30:10 db06 multipathd: sdh: directio checker reports path is down
        Jun  1 15:30:14 db06 kernel: [ 2355.312270] sd 6:0:1:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun  1 15:30:14 db06 kernel: [ 2355.312277] sd 6:0:1:0: [sdg] Sense Key : Illegal Request [current]
        Jun  1 15:30:14 db06 kernel: [ 2355.312282] sd 6:0:1:0: [sdg] <<vendor>> ASC=0x94 ASCQ=0x1ASC=0x94 ASCQ=0x1
        Jun  1 15:30:14 db06 kernel: [ 2355.312290] sd 6:0:1:0: [sdg] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        Jun  1 15:30:14 db06 kernel: [ 2355.312299] end_request: I/O error, dev sdg, sector 0

    Does anyone know how I can get the inactive path to show "[active][ghost]" instead of "[failed][faulty]"? I assume that once I get that right, the spam in my syslog will end as well. One final thing worth mentioning is that the IBM Redbook doc targets SLES 11, so I'm assuming there's something a little different under Ubuntu that I just haven't figured out yet.

    Update: As suggested by Mitch, I've tried removing /etc/multipath.conf, and now the output of multipath -ll looks like this:

        root@db06:~# multipath -ll
        sdg: checker msg is "directio checker reports path is down"
        sdh: checker msg is "directio checker reports path is down"
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx dm-1 IBM ,1746 FASt
        [size=4.9T][features=0][hwhandler=0]
        \_ round-robin 0 [prio=1][active]
         \_ 5:0:2:0 sde 8:64  [active][ready]
        \_ round-robin 0 [prio=1][enabled]
         \_ 5:0:1:0 sdd 8:48  [active][ready]
        \_ round-robin 0 [prio=0][enabled]
         \_ 6:0:1:0 sdg 8:96  [failed][faulty]
        \_ round-robin 0 [prio=0][enabled]
         \_ 6:0:2:0 sdh 8:112 [failed][faulty]

    So it's more or less the same, with the same messages in the syslog every 5 seconds as before, but the grouping has changed.
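
    An observation on the configuration above (an untested hunch, not a confirmed fix): the multipaths section overrides the rdac path_checker from the devices section with readsector0, and the checker messages in the multipath -ll output are indeed coming from the directio checker. On an active/passive array a read-based checker will fail against the passive controller, whereas the rdac checker is the one that reports passive paths as [active][ghost]. A minimal sketch of the per-LUN stanza with that override removed, so the checker is inherited from the devices section:

        multipaths {
            multipath {
                wwid           xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                alias          array07
                # no path_checker here: inherit "rdac" from the devices section
                path_selector  "round-robin 0"
                failback       "5"
                no_path_retry  "5"
            }
        }

    After editing, the daemon can be told to re-read its configuration and the paths re-checked with something like:

        sudo /etc/init.d/multipath-tools reload
        sudo multipath -ll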

    Read the article

  • So No TECH job so far.

    - by Ratman21
    Oh, I found some temp work with the US Census and I have managed to keep the house (so far), but it looks like we are going to have to do a short sale, and the temp job will be ending soon. On top of that, it looks like my unemployment fund is drying up; I will have about one month left after the Census job is done. I am now down to applying for work at KFC. This is the type of work I started with, before I was a tech geek, and I really didn't think I would be doing this kind of work in my later years, but I have a wife and kid, so I've got to suck it up and do it. Oh, and here is my new resume… go ahead, I know you want to tear it up. I really don't care any more.

    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011
    H: (904) 879-4880  C: (352) 356-0945
    E: [email protected]
    Web: http://beingscottnewman.webs.com/

    OBJECTIVE
    To obtain a network or technical support position.

    KEYWORD SUMMARY
    CompTIA A+, Network+, and Security+ certified. Network operation, technical support, client/vendor relations, networking/administration, Cisco routers/switches, helpdesk, Microsoft Office Suite, website design/development/management, Frame Relay, ISDN, Windows NT/98/XP, Visio, inventory management, CICS, programming, COBOL IV, Assembler, RPG.

    QUALIFICATIONS SUMMARY
    Twenty years' experience in computer operations, technical support, and technical writing. Also two and a half years' experience in internet/intranet operations.

    PROFESSIONAL EXPERIENCE
    October 2009 – Present*: Volunteer website and PC technician (part time), True Faith Christian Fellowship Church, Callahan, FL. Project: create and maintain the church's web site to give it worldwide exposure.

    Aug 2008 – September 2009*: Volunteer church sound and video technician (part time), Thomas Creek Baptist Church, Callahan, FL.

    *Note: these roles were for learning and/or keeping skills updated while looking for a tech job and training for new skills.

    February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL. (FNIS acquired Certegy in 2005 and, out of 20 personnel, I was one of three kept on.)

    August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL. (In August 2003 Certegy terminated its contract with EDS and, out of 40 personnel, I was one of six kept on.)
    Projects: Creation and update of listings and placement for all raised-floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment rack diagrams showing the placement of all devices, drawn in Visio, cross-referenced with an inventory Excel document showing which department was responsible for each device. Sole creator of the network operation and server operation procedures guide (NetOps Guide).
    Expertise: Resolving circuit and/or router issues, or assisting the circuit carrier in resolving them, from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution.

    July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.)

    January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues.

    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida: CompTIA A+, Security+, and Network+ certified. Currently working on CCNA certification (07/30/10).
    - Mott Community College, Flint, Michigan: Associate's degree, data processing and general education.
    - Currently studying Japanese.

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

    A number of people at Clarity are personally interested in home energy conservation and home automation. We feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsoft's Hohm web service, it provides a great model for communicating the concept. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices. I designed an application that focuses on total home control, not just energy usage.

    Application Overview
    By monitoring home consumption in real time, and with yearly projections, users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use the information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on and off at the touch of a toggle switch on your phone, away from home. Forgot to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home, making it a completely customized and convenient experience. To be clear, this equates to more security, a big environmental impact, and even bigger savings.

    Design and User Interface
    Since this panorama application is designed for Windows Phone 7 devices, it complies with the UI Design and Interaction Guide for WP7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn devices on and off (not shown in the mockup). I followed the recommendations for text-based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quick-launching common user actions. One last design feature to point out is the technical reason for supplying both light and dark themes for the app. Since this application targets energy consumption, it only makes sense to consider the effect of the app's background color or image on the phone's energy use. When displaying darker colors like black, the OLED display may use less power, extending battery life.

    Other Considerations
    For now I left out options for wind- and solar-powered energy because they are not available to everyone. Renewable energy sources and the new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations, the application could recommend customized ways to save based on your current utility providers and the options available in your area. If your television or refrigerator is guilty of sucking a lot of energy, then you may see recommendations for Energy Star products that could save you even more money!

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly. As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale of the Framework Upgrade. <strange mirage effect to denote a flashback. A subtle background string of music starts playing in minor key> Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5, let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put up a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (followed by at least ten minutes of installation, followed by a five-minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that this probably wasn't the case, since 3.5 had been out for so many years, but they weren't sure. <background music switches to major key> SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said. For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of the available cores. But crucially (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above. So he knew that they could make the switch and that it was safe to do so. With this reassurance, they could use WCF to not only make development easier, but also to give them a really nice way to do inter-process communication between the Source Control and SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF. <strange mirage effect to denote end of flashback> So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.

    Read the article

  • Useful SVN and Git commands – Cheatsheet

    - by Madhan ayyasamy
    The following snippets will be helpful for anyone who uses version control systems like Git and SVN.

    svn checkout/co checkout-url – pulls an SVN tree from the server.
    svn update/up – updates the local copy with the changes made in the repository.
    svn commit/ci -m "message" filename – commits the changes in a file to the repository with a message.
    svn diff filename – shows the differences between your current file and what is in the repository.
    svn revert filename – overwrites the local file with the one in the repository.
    svn add filename – adds a file to the repository; you must commit your changes before they are reflected in the repository.
    svn delete filename – deletes a file from the repository; you must commit your changes before they are reflected in the repository.
    svn move source destination – moves a file from one directory to another or renames a file. It affects your local copy immediately, and the repository after committing.

    git config – sets configuration values for your user name, email, file formats and more.
    git init – initializes a git repository: creates the initial '.git' directory in a new or existing project.
    git clone – makes a copy of a Git repository from a remote source. Also adds the original location as a remote so you can fetch from it again and push to it if you have permissions.
    git add – adds file changes in your working directory to your index.
    git rm – removes files from your index and your working directory so they will not be tracked.
    git commit – takes all of the changes written in the index, creates a new commit object pointing to it and sets the branch to point to that new commit.
    git status – shows the status of files in the index versus the working directory.
    git branch – lists existing branches, including remote branches if '-a' is provided. Creates a new branch if a branch name is provided.
    git checkout – checks out a different branch: switches branches by updating the index, working tree, and HEAD to reflect the chosen branch.
    git merge – merges one or more branches into your current branch and automatically creates a new commit if there are no conflicts.
    git reset – resets your index and working directory to the state of your last commit.
    git tag – tags a specific commit with a simple, human-readable handle that never moves.
    git pull – fetches the files from the remote repository and merges them with your local one.
    git push – pushes all the modified local objects to the remote repository and advances its branches.
    git remote – shows all the remote versions of your repository.
    git log – shows a listing of commits on a branch including the corresponding details.
    git show – shows information about a git object.
    git diff – generates patch files or statistics of differences between paths or files in your git repository, or your index or your working directory.
    gitk – graphical Tcl/Tk-based interface to a local Git repository.
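
    For context, here is how a handful of these commands chain together in a typical first-use sequence (a minimal sketch; the file and branch names are hypothetical):

        git init                        # create the .git directory in the project
        git add README.txt              # stage a file
        git commit -m "initial commit"  # record the staged changes
        git branch feature-x            # create a branch
        git checkout feature-x          # switch to it
        # ...edit, git add, and git commit on the branch...
        git checkout master             # switch back
        git merge feature-x             # merge the branch into master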

    Read the article

  • Solaris 11 Update 1 - Link Aggregation

    - by Wesley Faria
    Solaris 11.1 — At the beginning of this month, at Oracle's worldwide event, Oracle OpenWorld, the new release of Solaris 11 was launched. It arrives full of new features: approximately 300 new capabilities in networking, security, administration and more. Today I am going to talk about a very interesting networking feature, Link Aggregation. Solaris has supported Link Aggregation since Solaris 10 Update 1, but Solaris 11 Update 1 brings significant enhancements. Link Aggregation, as the name itself says, is the aggregation of more than one physical network interface into one logical interface. Here are some of the things Link Aggregation can do:

    - Increase bandwidth;
    - Improve availability by doing failover and failback;
    - Simplify network administration.

    Solaris 11.1 supports two types of Link Aggregation, trunk aggregation and datalink multipathing aggregation. Both work by distributing network packets across the interfaces of the aggregation, ensuring better utilization of the network. Let's look at each of them a little more closely.

    Trunk Aggregation
    Trunk aggregation aims to increase bandwidth, whether for applications with heavy network traffic or for consolidation. For example, suppose we have a server that was acquired to host several virtual machines, each with its own demand, and this server has two network cards. We can then create an aggregation of those two cards so that Solaris 11.1 sees them as if they were one, doubling the available bandwidth, as in the figure below.

    The figure shows an aggregation of two physical cards, NIC 1 and NIC 2, connected to the same switch, and two virtual interfaces, VNIC A and VNIC B. For this to work, however, we need a switch that supports LACP (Link Aggregation Control Protocol). The role of LACP is to perform the aggregation at the switch layer, because otherwise the packets leaving the server cannot be reassembled when they arrive at the switch. Another way of configuring trunk aggregation is point-to-point, where instead of using a switch the two servers are connected directly. In that case the aggregation on one server talks directly to the aggregation on the other, providing protection against failures as well as greater bandwidth.

    Let's see how to configure trunk aggregation:

    1 – Check which interfaces are available:
        # dladm show-link
    2 – Check the interfaces:
        # ipadm show-if
    3 – Delete the addressing of the existing interfaces:
        # ipadm delete-ip <interface>
    4 – Create the trunk aggregation:
        # dladm create-aggr -L active -l <interface> -l <interface> aggr0
    5 – List the aggregation just created:
        # dladm show-aggr

    Datalink Multipathing Aggregation
    As we saw earlier, trunk aggregation is implemented against a single switch that supports LACP, so we have a single point of failure: the switch. To solve this problem in Solaris 10 we used IPMP (IP Multipathing), which combines two aggregations on the same link (in other words, another layer of virtualization). With Solaris 11 Update 1 this is no longer necessary: you can have an aggregation of physical interfaces with each one connected to a different switch, as in the figure below.

    Here we have an aggregation called aggr containing four physical interfaces, where interfaces NIC 1 and NIC 2 are connected to one switch and interfaces NIC 3 and NIC 4 are connected to another. In addition, four more virtual interfaces were created (vnic A, vnic B, vnic C and vnic D) which can be assigned to different applications/zones. This gives us high availability at every layer, since we can tolerate failures in switches, links, and physical network interfaces. To configure it, follow the same steps as in the trunk aggregation configuration up to step 3, then do the following:

    4 – Create the aggregation:
        # dladm create-aggr -m haonly -l <interface> -l <interface> aggr0
    5 – List the aggregation just created:
        # dladm show-aggr

    Once configured, whether in trunk aggregation mode or in datalink multipathing aggregation mode, you can switch from one mode to the other, and you can add and remove physical or virtual interfaces. Well folks, that is what I had to show about the new Link Aggregation functionality in Solaris 11 Update 1. I hope you liked it; until the next new feature!
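
    To make step 4 concrete, here is a hypothetical run with two links named net0 and net1 (placeholder names; real link names come from dladm show-link), followed by putting a static address on the new aggregation:

        # dladm create-aggr -L active -l net0 -l net1 aggr0
        # dladm show-aggr
        # ipadm create-ip aggr0
        # ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4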

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver moment. Usually I ignore blogosphere fluff and just use this space to write what I think is important. However, #MVP10 just ended and I have a stronger sense of community. Besides, where else would I mention that my second-best MacGyver moment was making a BIOS jumper out of a soda can? Aluminum is conductive, and I didn't have any real jumpers lying around. My best moment is probably my entire home computer network. Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose. My primary domain controller is a Dell 2300. The service tag indicates it was shipped to the original owner in 1999. The box has a PERC/1 RAID controller. I acquired it from a previous employer for $50. It runs Windows Server 2003 Enterprise Edition and does DNS, DHCP, and RADIUS services as a bonus. RADIUS authentication is used for VPN and wireless access. It is nice to sign in once and be done with it. The secondary domain controller is an old desktop: dual P-III 933 with some extra drives. My VPN box is a P-II 250 with 384 MB of RAM and a 21 GB hard drive. I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again. Dynamic DNS lets me connect no matter how often Comcast shuffles my IP. The Hyper-V box is a desktop system with 8 GB RAM and an AMD Athlon 5000+ processor. It cost me less than $500 to put together nearly two years ago. I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certified meant the drivers for Vista would load into Windows 2008. Turns out I was right. Later I added three 1 TB drives but wasn't too happy with how that turned out. I recovered two of the drives yesterday and am building an iSCSI storage unit. (Much thanks to StarWind. Great product.) I am using an old AMD 1.1 GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server. The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out, maybe in a week or two. A couple of D-Link gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N access point, the two notebooks and the Xbox, and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans; I buy top-of-the-line for both. I even pull and crimp my own cables. Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere. Every PC and laptop is pretty much interchangeable for email and basic workstation tasks. That helps a lot too. Of course I will tag SQLVariant.

    Read the article

  • 5 Ways to Determine Mobile Location

    - by David Dorf
    In my previous post, I mentioned the importance of determining the location of a consumer using their mobile phone. Retailers can track anonymous mobile phones to determine traffic patterns both inside and outside their stores. And with consumers' permission, retailers can send location-aware offers to mobile phones; for example, a coupon for cereal as you walk down that aisle. When paying with Square, your location is matched with the transaction. So there are lots of reasons for retailers to want to know the location of their customers. But how is it done? I thought I'd dive a little deeper on that topic and consider the approaches to determining location.

    1. Tower Triangulation
    By comparing the relative signal strength from multiple antenna towers, a general location of a phone can be roughly determined to an accuracy of 200-1000 meters. The more towers involved, the more accurate the location.

    2. GPS
    Using Global Positioning Satellites is more accurate than using cell towers, but it takes longer to find the satellites, it uses more battery, and it won't work well indoors. For geo-fencing applications, like those provided by Placecast and Digby, cell towers are often used to determine if the consumer is nearing a "fence," and then the phone switches to GPS to determine the actual crossing of the fence.

    3. WiFi Triangulation
    WiFi triangulation is usually more accurate than using towers simply because there are so many more WiFi access points (i.e. radios in routers) around. The position of each WiFi AP needs to be recorded in a database and used in the calculations, which is what Skyhook has been doing since 2008. Another advantage of this method is that it works well indoors, although it usually requires additional WiFi beacons to get the accuracy down to 5-10 meters. Companies like ZuluTime, Aisle411, and PointInside have been perfecting this approach for retailers like Meijer, Walgreens, and HomeDepot. Keep in mind that a mobile phone doesn't have to connect to the WiFi network in order to be located; the WiFi radio in the phone only needs to be on. Even when not connected, WiFi radios talk to each other to prepare for a possible connection.

    4. Hybrid Approaches
    Naturally the most accurate approach is to combine the approaches described above. The more available data points, the greater the accuracy. Companies like ShopKick like to add in acoustic triangulation using the phone's microphone, and NearBuy can use video analytics to increase accuracy.

    5. Magnetic Fields
    The latest approach, and this one is really new, takes a page from the animal kingdom. As you've probably learned from guys like Marlin Perkins, some animals use the Earth's magnetic fields to navigate. By recording magnetic variations within a store, then matching those readings with ones from a consumer's phone, location can be accurately determined. At least that's the approach IndoorAtlas is taking, and the science seems to bear it out. It works well indoors and doesn't require retailers to purchase any additional hardware. Keep an eye on this one.

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

    About the writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete identity and access management platform, and it is the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, which makes the Oracle offering unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge-based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or to prove identity for password resets, lockouts, and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings. The solution offers the ability to challenge users with dynamic knowledge-based authentication based on the risk of an access request or transaction, thereby offering an additional level beyond other authentication methods, such as static challenge questions or a one-time password, when needed. For example, with Oracle Identity Management self-service, the forgotten password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk. The Oracle Identity Management platform dynamically switches to the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution thus both improves the user experience and saves money by avoiding unnecessary help desk calls. Oracle Identity and Access Management secures applications, Juniper SSL VPN, and other web resources with a thoroughly modern, layered, and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

    Resources:
    Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
    Oracle Access Management (HTML)
    Oracle Adaptive Access Manager (PDF)

    Read the article

  • With 2 superposed cameras at different depths, switching their culling masks between layers to implement object-selective antialiasing

    - by user36845
    We superposed two cameras, one of which uses AA as a post-processing effect (AA filtering is cancelled). The camera with the AA effect has depth 0 and the camera with no effect has depth 1, as can be seen in the 5th and 6th pictures. The objects seen on the left are in layer 1 and the ones on the right are in layer 2. We then wrote a script that switches the culling masks of the cameras between the two layers at the push of buttons 1 and 2 respectively, and accomplishes object-selective antialiasing, as seen in the first three pictures. (The way the two cameras separately switch culling masks between layers is illustrated in pictures 7, 8 and 9.)

    HOWEVER, after making the environment 3D (see pictures 1-4) by parenting the 2 cameras under a First-Person Controller, we started moving around in the environment and stumbled upon a big issue: when we look at the objects from an angle such as in the 4th picture and we want to apply antialiasing to the first object (the object on the left), which now stands closer to our cameras, the culling mask of the 1st camera, which is at depth 0, has to be switched to that object's layer, while the second object has to be in the culling mask of the 2nd camera at depth 1. And since the image outputs of two superposed cameras are laid on top of one another, we obtain the erroneous/unrealistic result of the object farther in the back appearing closer to the camera than the front object (see the 4th picture).

    We already tried switching the depths of the cameras so that the 1st camera (with AA) has depth 1 and the second has depth 0, BUT the camera with the AA effect works in such a way that it applies the AA effect to its full view. So the camera with the AA effect always has to remain at the lowest depth, and the layer of the object to be antialiased then has to be assigned to the culling mask of the AA camera; otherwise all objects in the AA camera's view (the two cubes in our case) become antialiased, which we don't want. So, how can we resolve this?

    The pictures are below and in the comments, since each post can have 2 pics:
    Pic 1. No button is pushed: both objects appear aliased.
    Pic 2. Button 1 is pushed: the left (1st) object is antialiased. The 2nd object remains aliased.
    Pic 3. Button 2 is pushed: the right (2nd) object is antialiased. The 1st object remains aliased.
    Pic 4. The problematic result in 3D, when using two superposed cameras with different depths.
    Pic 5. Camera 1's properties: using AA post-processing; its depth is 0.
    Pic 6. Camera 2's properties: NOT using AA post-processing; its depth is 1.
    Pic 7. When no button is pushed, both objects are in the culling mask of Camera 2 and are aliased.
    Pic 8. When 1 is pushed, camera 1 (bottom) shows the 1st object and camera 2 (top) shows the 2nd.
    Pic 9. When 2 is pushed, camera 1 (bottom) shows the 2nd object and camera 2 (top) shows the 1st.

    Read the article

  • surviveFocusChange=true

    - by Geertjan
    Here's a very cool thing that I keep forgetting about but that Jesse reminded me of in the recent blog entries on Undo/Redo: "surviveFocusChange=true". Look at the screenshot below. You see two windows with a toolbar button. The toolbar button is enabled whenever an object named "Bla" is in the Lookup. The "Demo" window has a "Bla" object in its Lookup and hence the toolbar button is enabled when the focus is in the "Demo" window, as shown below: Now the focus is in the "Output" window, which does not have a "Bla" object in its Lookup and hence the button is disabled: However, there are scenarios where you might like the button to remain enabled even when the focus changes. (One such scenario is the Undo/Redo scenario in this blog a few days ago, i.e., even when the Properties window has the focus, the Undo/Redo buttons should be enabled.) Here you can see that the button is enabled even though the focus has switched to the "Output" window: How to achieve this? Well, you need to register your Action with "surviveFocusChange" set to "true". It is, by default, set to "false":

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionRegistration;
        import org.openide.util.NbBundle.Messages;

        @ActionID(category = "File", id = "org.mymodule.BlaAction")
        @ActionRegistration(surviveFocusChange = true,
                iconBase = "org/mymodule/Datasource.gif",
                displayName = "#CTL_BlaAction")
        @ActionReferences({
            @ActionReference(path = "Toolbars/Bla", position = 0)
        })
        @Messages("CTL_BlaAction=Bla")
        public final class BlaAction implements ActionListener {

            private final Bla context;

            public BlaAction(Bla context) {
                this.context = context;
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                // TODO use context
            }
        }

    That's all. Folders and files will now be created in the NetBeans Platform filesystem from the annotations above when the module is compiled, such that the NetBeans Platform will automatically keep the button enabled even when the user switches focus to a window that does not contain a "Bla" object in its Lookup. Hence, the same "Bla" object will remain available when switching from one window to another, until a new "Bla" object is made available in the Lookup.

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details. This is Part 2. I am discussing the Suspend shape together with convoys, and I am going to show that using them together is undesirable. In the previous article we investigated instance subscriptions and how they can create dangerous zones in processing. Let's start with the Suspend shape. [See the BizTalk Help:] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation." On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume, and the second is to terminate the orchestration. Is the orchestration stopped or unenlisted? You won't find a note about it anywhere. The fact is that the orchestration is stopped and still enlisted. This is very important. So again, the suspended orchestration can be resumed or terminated, and the moment when the operator or an operations script resumes or terminates it can be far away. That is important too. Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape. Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages should start a new orchestration instance and be consumed by it, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone," is expanded to the interval between the last Receive and the moment of termination or the end of the orchestration. A new orchestration instance for this convoy will not start until that moment. How fast will an operator find this suspended orchestration? Maybe in hours, maybe days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume those messages together with the orchestration. It seems the name Suspended is misleading. An orchestration can be in the Started (and Enlisted), Stopped (and Enlisted), or Unenlisted state. The Suspend shape switches the orchestration exactly into the Stopped state. The Stop name would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could search for these situations and return a compile error; in a similar case, the Orchestration Editor forces us to use only ordered-delivery ports with convoys. The run-time core could force the convoy orchestration to be suspended in an Unresumable state, which means the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed: the "Suspend" name is misleading, while the "Stop" name is clear and unambiguous. The same goes for the orchestration state; it should be "Stopped," not "Suspended (Resumable)". Conclusion: It is not recommended to use a Suspend shape together with convoy orchestrations.

    Read the article

  • Ti Launchpad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference to anyone that is interested...it is now Feb 2011 and I have not been posting here enough to remember this blog. Back in Nov 2010 I ordered the TI LaunchPad MSP430. It is a little target board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers, a couple of onboard LEDs, an SPI connector, some onboard jumpers and two programmable micro switches....all for less than $5.00...INCLUDING SHIPPING!!....not bad when the Arduinos are running around $20.00 for the target board, ATmega328 and cable off of eBay...I won't even mention the Microchip PIC right now.  Naw, for $5.00 the TI LaunchPad kit is about the cheapest fun around...if-uns you're a geek, that is...

    Well, the LaunchPad was backordered for almost two months; it came on Xmas eve, in fact...I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself.  That would have been the Web Expressions 4 I bought a few weeks back.  With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege....some oohs and ahhs but mostly duhs...I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD.  No CD.  I had already downloaded IAR, another programming IDE for these little micro bugs. In my spare time I toyed with IAR and the LaunchPad board, but after about two days of playing delete-the-driver with Windows I decided to just download CCS 4, the code-limited version, and give that a shot......CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit.  Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory (msp430G2131 from the drop-down list), and clicked OK...I was in!!

    CCS 4 is full of bells and whistles compared to IAR, which I would have preferred for the simplicity.  But Code Composer Studio really does have it all!!..the code-limited version is free, and of all things will give you a JavaScript editor box.  The whole layout in debugger mode reminds me of any modern programmer's IDE...I mean, sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI.  It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded for the LaunchPad kit.  Assembly.  I am right now looking for my old assembly textbooks...sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore.  Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future.  I may document the code here.  Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob.  For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling...neat.  Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp gauge widget up on the desktop...just to keep from getting bored.
    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there.....that's all for now, more to follow...a bit later, of course.
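    Since the author mentions writing code for these parts later, here is the sort of thing the free CCS tools will compile for the LaunchPad: a minimal LED-blink sketch in C. This is not from the original post; the msp430.h header, the BIT0 macro and the P1.0 red LED wiring are assumptions based on the standard LaunchPad layout.

    #include <msp430.h>

    int main(void)
    {
        WDTCTL = WDTPW | WDTHOLD;      /* stop the watchdog so it doesn't reset us */
        P1DIR |= BIT0;                 /* P1.0 (the LaunchPad's red LED, assumed) as output */

        for (;;) {
            P1OUT ^= BIT0;             /* toggle the LED */
            __delay_cycles(100000);    /* crude busy-wait delay (CCS/GCC intrinsic) */
        }
    }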

    Read the article

  • Hardware wireless switch has no effect after suspend and 13.10 upgrade

    - by blaineh
    This seems to be a fairly chronic problem, as shown by the following questions: How do I fix a "Wireless is disabled by hardware switch" error? Wireless disabled by hardware switch "Wireless disabled by hardware switch" after suspend and other hardware buttons ineffective - how can I solve this? but no good solutions have been found! Wireless works fine after a reboot, but after a suspend the hardware switch (for my laptop this is f12) has no effect on the wireless; it is just permanently off, and a red LED shows that it is.

    My rfkill list all reads:

    0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
    1: hp-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: yes

    Any combination of rfkill <un>block wifi doesn't work, although one time first blocking then unblocking actually turned it on again. sudo lshw -C network reads:

    *-network DISABLED
        description: Wireless interface
        product: AR9285 Wireless Network Adapter (PCI-Express)
        vendor: Qualcomm Atheros
        physical id: 0
        bus info: pci@0000:02:00.0
        logical name: wlan0
        version: 01
        serial: 78:e4:00:65:2e:3f
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=ath9k driverversion=3.11.0-12-generic firmware=N/A ip=155.99.215.79 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
        resources: irq:17 memory:90100000-9010ffff
    *-network DISABLED
        description: Ethernet interface
        product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:03:00.0
        logical name: eth0
        version: 02
        serial: c8:0a:a9:89:b4:30
        size: 10Mbit/s
        capacity: 100Mbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
        resources: irq:42 ioport:2000(size=256) memory:90010000-90010fff memory:90000000-9000ffff memory:90020000-9002ffff

    Also, adding a /etc/pm/sleep.d/brcm.sh file as recommended here simply prevents the laptop from suspending at all, which of course is no good. This question has an answer urging me to install the original driver, but it wasn't an "accepted answer" so I'd rather not take a chance on it. Also, I'll admit I'm a bit lost on that and would like help doing so with the specific information I've given. xev shows that no internal event is triggered for my wireless switch (f12), but other function keys also acting as hardware switches work fine. I would be happy to provide more information, so long as you're willing to help me find it for you! This is a very annoying bug. I have a Compaq Presario CQ62.

    Edit. I just tried to reload the BIOS defaults (or something) as shown by this video. Didn't work.
    Edit. I tried the contents of this answer, and it didn't work.
    Edit. I made a pastebin of dmesg. I couldn't even begin to understand the contents.
    Edit. Output of lspci | grep Network:

    02:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
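    For reference, the usual shape of the pm-utils hook mentioned above is sketched below. This is only an illustration under assumptions: the file name 99_ath9k is hypothetical, and reloading ath9k on resume is a generic workaround for Atheros cards, not a confirmed fix for this laptop. The script must also be made executable for pm-utils to run it.

    #!/bin/sh
    # Sketch of /etc/pm/sleep.d/99_ath9k (hypothetical name).
    # pm-utils passes the phase as $1.
    case "$1" in
        suspend|hibernate)
            modprobe -r ath9k        # unload the wireless driver before sleep
            ;;
        resume|thaw)
            modprobe ath9k           # reload it on wake
            rfkill unblock wifi      # clear any lingering soft block
            ;;
    esac
    exit 0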

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8, and the fact that it will have WinRT behind it, might make some of this less painful, but I discovered that the "XmlDocument" object is in a new location in WinRT and is almost the same as its brother in .NET:

    System.Xml.XmlDocument (.NET)
    Windows.Data.Xml.Dom.XmlDocument (WinRT)

    The problem I am trying to solve is how to work with both types in code that performs the same task on both the Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects, so that by using the "#if" compiler directive the calling code can easily work with either the WinRT version of the type or the .NET version.

    public class XmlNode
    {
    #if WIN8
        public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }

        public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)
        {
            Node = xmlNode;
        }
    #endif
    #if !WIN8
        public System.Xml.XmlNode Node { get; set; }

        public XmlNode(System.Xml.XmlNode xmlNode)
        {
            Node = xmlNode;
        }
    #endif
    }

    public class XmlNodeList
    {
    #if WIN8
        public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }
        public int Count { get { return (int)List.Count; } }

        public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)
        {
            List = list;
        }
    #endif
    #if !WIN8
        public System.Xml.XmlNodeList List { get; set; }
        public int Count { get { return List.Count; } }

        public XmlNodeList(System.Xml.XmlNodeList list)
        {
            List = list;
        }
    #endif
    }

    From there I can use my XmlNode and XmlNodeList in the calling code without having to clutter it with all of the additional #if switches. The challenge after this was that the code working directly with the XmlDocument object needed to be separate on the two platforms, since the method for populating the XmlDocument object is completely different on each. To solve this I made partial classes: one for .NET and one for WinRT. Both projects link to the partial class that contains the code that is the same for the majority of the class, and each platform-specific partial class contains the code that is unique to its version of XmlDocument. The files with the little arrow in the lower left corner denote 'linked files'; they are shared in multiple projects but only exist in one location in source control. You can see that the _Win7 partial class is included directly in the project, since it includes code that is only for the .NET platform, whereas its cousin the _Win8 (not pictured above) has all of the code specific to the Win8 platform. In the _Win7 partial class is this code:

    public partial class WUndergroundViewModel
    {
        public static WUndergroundData GetWeatherData(double lat, double lng)
        {
            WUndergroundData data = new WUndergroundData();
            System.Net.WebClient c = new System.Net.WebClient();
            string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
            req = req.Replace("[LAT]", lat.ToString());
            req = req.Replace("[LNG]", lng.ToString());
            XmlDocument doc = new XmlDocument();
            doc.Load(c.OpenRead(req));
            foreach (XmlNode item in doc.SelectNodes("/response/features/feature"))
            {
                switch (item.Node.InnerText)
                {
                    case "yesterday":
                        ParseForecast(
                            new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                            new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                            data);
                        break;
                    case "conditions":
                        ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                        break;
                    case "forecast":
                        ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                        break;
                }
            }
            return data;
        }
    }

    In the _Win8 partial class is this code:

    public partial class WUndergroundViewModel
    {
        public async static Task<WUndergroundData> GetWeatherData(double lat, double lng)
        {
            WUndergroundData data = new WUndergroundData();
            HttpClient c = new HttpClient();
            string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
            req = req.Replace("[LAT]", lat.ToString());
            req = req.Replace("[LNG]", lng.ToString());
            HttpResponseMessage msg = await c.GetAsync(req);
            string stream = await msg.Content.ReadAsStringAsync();
            XmlDocument doc = new XmlDocument();
            doc.LoadXml(stream, null);
            foreach (IXmlNode item in doc.SelectNodes("/response/features/feature"))
            {
                switch (item.InnerText)
                {
                    case "yesterday":
                        ParseForecast(
                            new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                            new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                            data);
                        break;
                    case "conditions":
                        ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                        break;
                    case "forecast":
                        ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                        break;
                }
            }
            return data;
        }
    }

    Summary: This method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion; I was considering that approach.

    Read the article

  • Getting data from UITableView

    - by Tejaswi Yerukalapudi
    Hi, I have a few custom UITableViewCells - http://img11.imageshack.us/i/customfacilitiescell.png/ which are added to this UIViewController - http://img189.imageshack.us/i/facilitycontroller.png/ Now, on clicking a button in the controller, I'd like to get the on/off status of all the UISwitches in the controller. Thanks, Teja

    Edit: I've made a few edits, but I still can't figure out how to do this. My program structure currently:

    A CustomCell.xib that looks like this - http://img11.imageshack.us/i/customfacilitiescell.png/
    A CustomCellController that is a subclass of UITableViewCell and has the IBOutlets for the labels and switches from above.

    Now I have a UIViewController<UITableViewDataSource, UITableViewDelegate> (say, Screen1Controller) which looks like this - http://img189.imageshack.us/i/facilitycontroller.png/

    The table view cell is being created like this:

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *Id = @"CustomFacilitiesCell";
        CustomFacilitiesCellController *cell = (CustomFacilitiesCellController *)[tableView dequeueReusableCellWithIdentifier:Id];
        if (cell == nil) {
            NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CustomFacilitiesCell" owner:self options:nil];
            for (id oneObject in nib) {
                if ([oneObject isKindOfClass:[CustomFacilitiesCellController class]])
                    cell = (CustomFacilitiesCellController *)oneObject;
            }
        }
        NSUInteger row = [indexPath row];
        CustomFacilitiesCellController *rowData = (CustomFacilitiesCellController *)[self.facilities objectAtIndex:row];
        cell.facname.text = rowData.facname.text;
        cell.FacID.text = rowData.FacID.text;
        cell.facSwitch = [(CustomFacilitiesCellController *)rowData facSwitch];
        UISwitch *temp = cell.facSwitch;
        [(UISwitch *)[cell facSwitch] addTarget:self
                                         action:@selector(facSwitchOptionChanged:)
                               forControlEvents:UIControlEventValueChanged];
        cell.facSwitch.on = NO;
        //cell.facSwitch.enabled = FALSE;
        cell.accessoryType = UITableViewCellAccessoryNone;
        return cell;
    }

    - (IBAction)facSwitchOptionChanged:(id)sender {
        int i = 0;
    }

    In particular, my problem is that facSwitchOptionChanged: isn't getting called. Thanks again for the help, Teja.
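    One common pattern for collecting the switch states is sketched below. It is not from the original question: self.switchStates and self.facilitiesTable are assumed properties (a mutable array of NSNumber BOOLs, one per row, and the table view outlet), and the superview walk assumes the switch sits directly in the cell's content view, as it did on cells of that era. The idea is to mirror each switch into the model when it changes, then read the model when the button is tapped instead of interrogating the cells.

    - (IBAction)facSwitchOptionChanged:(id)sender {
        UISwitch *sw = (UISwitch *)sender;
        // Walk up from the switch to its cell to recover the row index
        // (assumes the switch is a direct subview of the cell's content view).
        UITableViewCell *cell = (UITableViewCell *)[[sw superview] superview];
        NSIndexPath *path = [self.facilitiesTable indexPathForCell:cell];
        [self.switchStates replaceObjectAtIndex:path.row
                                     withObject:[NSNumber numberWithBool:sw.on]];
    }

    - (IBAction)doneTapped:(id)sender {
        // Read the mirrored states instead of the cells themselves.
        for (NSUInteger row = 0; row < [self.switchStates count]; row++) {
            BOOL on = [[self.switchStates objectAtIndex:row] boolValue];
            NSLog(@"facility %lu is %@", (unsigned long)row, on ? @"ON" : @"OFF");
        }
    }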

    Read the article

  • NSDate dateFromString, how to parse 'around' UTC, GMT and User locale?

    - by RickiG
    Hi, I parse some values from an XML file. There is a @"25-12-2010'T'23:40:00" string with the time and date, and there is a string with the GMT offset, like this: @"+0200". So the above time is the 25th of December, 23:40:00, in time zone +0200 GMT (or 21:40 UTC). I have lots of these dates with different GMT offsets. I have to display these dates as they are, i.e. they must not be changed to fit the locale of the user. So if time 1 is 22:45 +0500, then that is what I must show the user, even if the user is in a different timezone. I have all sorts of trouble displaying, calculating with and parsing these strings. If I use a dateFormatter and dateFromString, the user-specific GMT info will be included in the resulting NSDate, meaning the above will be saved as 23:40:00 +0100 GMT because that is my phone's setting, and maybe as 23:40:00 -0400 on the phone of a user from New York. When I subsequently do subtraction, addition and comparisons between these dates I have to keep the GMT offset around, and everything gets worse if the phone switches locale settings between when the date was parsed and when the date is displayed... Is there a way for me to extract this date from the string as UTC, then save it as an interval instead of an actual (timezone-dependent) date? I know that is how dates are always saved internally. But I can't figure out how to do it with the separate GMT string while taking into account the user's locale. Cheers
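    A sketch of one way to approach this (names are illustrative and not from the question; it assumes the literal apostrophes can simply be stripped before parsing, and it hard-codes the +0200 offset when rendering): parse the combined string once into an NSDate, which is an absolute instant independent of any time zone, and keep the offset string around only for display.

    NSString *raw    = @"25-12-2010'T'23:40:00";
    NSString *offset = @"+0200";

    // Build "25-12-2010T23:40:00+0200" and parse it in one go; the Z pattern
    // consumes the RFC 822 offset, so the resulting NSDate is the UTC instant.
    NSString *combined = [[raw stringByReplacingOccurrencesOfString:@"'" withString:@""]
                             stringByAppendingString:offset];
    NSDateFormatter *in = [[[NSDateFormatter alloc] init] autorelease];
    [in setDateFormat:@"dd-MM-yyyy'T'HH:mm:ssZ"];
    NSDate *instant = [in dateFromString:combined];   // 21:40:00 UTC

    // For display, render with the original offset, not the user's time zone.
    NSDateFormatter *out = [[[NSDateFormatter alloc] init] autorelease];
    [out setDateFormat:@"dd-MM-yyyy HH:mm"];
    [out setTimeZone:[NSTimeZone timeZoneForSecondsFromGMT:2 * 3600]]; // +0200, hard-coded here
    NSLog(@"%@", [out stringFromDate:instant]);       // "25-12-2010 23:40"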

    Read the article

  • OrientationEventListener not working properly

    - by nixau
    Hi all, I need to handle orientation changes in my Android application. For this purpose I decided to use the OrientationEventListener convenience class, but its callback method exhibits somewhat strange behavior. My application starts in portrait mode and then eventually switches to landscape. I have some custom code executing in the onOrientationChanged callback that provides some additional UI handling logic; it has a few calls to findViewById. What is strange is that when switching back from landscape to portrait mode, onOrientationChanged is called twice, and what's even worse, the second call is dealing with a bad Context: findViewById starts returning null. These calls are made right from the main thread.

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        listener = new OrientationListener();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // enabling listening
        listener.enable();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // disabling listening
        listener.disable();
    }

    I've replicated the same behavior with a dummy Activity with no logic except the orientation handling. I initiate the orientation switch from the Android 2.2 emulator by pressing Ctrl+F11. What could be wrong?

    Upd: the inner class that implements OrientationEventListener:

    private class OrientationListener extends OrientationEventListener {
        public OrientationListener() {
            super(getBaseContext());
        }

        @Override
        public void onOrientationChanged(int orientation) {
            toString();
        }
    }
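    As an aside, one pattern that tames the double callback (a sketch, not a confirmed fix for the bad-Context part): onOrientationChanged reports a raw degree value on every sensor tick, so quantizing it and reacting only when the quadrant actually changes filters out duplicate notifications. The lastQuadrant field name below is illustrative.

    private int lastQuadrant = -1;

    @Override
    public void onOrientationChanged(int orientation) {
        if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) {
            return;                                   // device is flat; no usable angle
        }
        int quadrant = ((orientation + 45) / 90) % 4; // 0 = portrait, 1/3 = landscape, 2 = upside down
        if (quadrant != lastQuadrant) {
            lastQuadrant = quadrant;
            // react exactly once per real orientation change here
        }
    }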

    Read the article

  • Embed Youtube in UIWebView behind transparent img. Wmode transparent and z-index doesn't work

    - by Allisone
    I'm using this code:

    - (void)embedYouTube:(NSString *)urlString frame:(CGRect)frame {
        NSString *embedHTML = @"\
        <html><head>\
        <style type=\"text/css\">\
        body {\
        background-color: black;\
        }\
        #container{\
        position: relative;\
        z-index:1;\
        }\
        #video,#videoc{\
        position:absolute;\
        z-index: 1;\
        border: none;\
        }\
        #tv{\
        background: transparent url(tv.png) no-repeat;\
        width: 320px;\
        height: 205px;\
        position: absolute;\
        top: 0;\
        z-index: 999;\
        }\
        </style>\
        </head><body style=\"margin:0\">\
        <div id=\"tv\"></div>\
        <object id=\"videoc\" width=\"240\" height=\"160\">\
        <param name=\"movie\" value=\"%@\"></param>\
        <param name=\"wmode\" value=\"transparent\"></param>\
        <embed wmode=\"transparent\" id=\"video\" src=\"%@\" type=\"application/x-shockwave-flash\" \
        width=\"240\" height=\"160\"></embed>\
        </object>\
        </body></html>";

        NSString *path = [[NSBundle mainBundle] bundlePath];
        NSURL *baseURL = [NSURL fileURLWithPath:path];
        NSString *html = [NSString stringWithFormat:embedHTML, urlString, urlString];
        UIWebView *videoView = [[UIWebView alloc] initWithFrame:frame];
        [videoView loadHTMLString:html baseURL:baseURL];
        [self.view addSubview:videoView];
        [videoView release];
    }

    It's the first time I have used UIWebView and the first time I have used video on iPhone. The video plays, so that's working. BUT: I want to have an old-school TV (round corners) in the foreground, with switches and so on. The TV is an image with transparent pixels in the middle, so that a video lying behind the TV will shine through, as if the video were being shown on the TV. But first of all, the video has a border that I can't remove, and second, it's always in the foreground. In Safari and in Firefox on the Mac it's working. So is it an iPhone thing? Could it be that it simply won't work on iPhone? Or do I have some CSS/HTML typos?

    Read the article

  • Fail to load NPAPI plugin in Google Chrome on Mac OS X

    - by Roman
    I have been trying to get Google Chrome (6.0.401.1 dev) on Mac OS X to load an NPAPI plugin, without success so far. I have been working from the npsimple example here: http://git.webvm.net/?p=npsimple. Using gcc on Mac and VC++ 2008 on Windows, I managed to get it running in Safari and Firefox on Mac OS X and in Firefox and Google Chrome on Windows, but not in Google Chrome on Mac OS X. When debugging Google Chrome on Mac OS X, it seemed Chrome was briefly dyld-loading (and immediately dyld-unloading) the plugin on startup, but without actually looking up any symbols within the plugin or calling any of its functions. It seemed to be doing that for every plugin, though. Also, when loading a page with the embed tag for the plugin, Google Chrome did not seem to even dyld-load the plugin, and no functions were called (not even NP_GetEntryPoints). Google Chrome also does not output any error message; it just simply does not load the plugin. I am not sure I caught everything with gdb because Google Chrome uses different processes, but I have also tried all the switches like --no-sandbox, --single-process and --plugin-startup-dialog (which incidentally does not seem to work at all on Mac OS X). I also made sure the architecture of the binary matches (i.e. 32-bit for Google Chrome). Has anybody had similar problems before? Is there anything I am missing here, like a gcc switch when compiling or something? Any help would be greatly appreciated.
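    For reference, these are the three entry points a Mac NPAPI plugin is expected to export. This is just a restatement of the contract that npsimple already implements, not new code to add; the header names follow the Gecko SDK of that era and may differ in other SDK layouts.

    #include "npapi.h"
    #include "npfunctions.h"

    /* On Mac OS X, NP_Initialize receives only the browser's function table;
       the plugin hands its own table back from NP_GetEntryPoints. */
    NPError NP_Initialize(NPNetscapeFuncs *browserFuncs);
    NPError NP_GetEntryPoints(NPPluginFuncs *pluginFuncs);
    void    NP_Shutdown(void);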

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >