Search Results

Search found 26693 results on 1068 pages for 'back to basics'.


  • Need help diagnosing network performance issues

    - by tokes
    I am currently working in a developing country as a system analyst for a government department. My area of expertise is software projects, but I've come across a few issues with the network setup in my office. (Unfortunately, being a developing country, there's not a lot of professional help available for this sort of thing.) Most recently, I am trying to diagnose a problem with slowness on the network.

    Our office is connected to the internet via an ADSL wireless modem/router (called Router). The modem is connected via ethernet to a switch (called Switch). The modem also acts as a wireless access point (called Wireless1), though because it is in a room at the end of the floor, its range is limited. There are ethernet ports installed around the office, and their cables all lead back to the same switch. In closer vicinity to the bulk of the client computers, there is another wireless router that acts as an access point for those clients (called Wireless2). That router is connected via ethernet to a wall port, and therefore to Switch. There is also a Windows server which acts as a DNS server (called DNSBox); it is located in the same room and is connected directly to Switch.

        Internet --- Router/Wireless1 (192.168.10.1)
                          |
                       Switch --- DNSBox (192.168.10.4)
                          |------- Other clients
                          |------- Wireless2 (192.168.10.198)

    One final thing to mention about the network setup: all clients are configured with manual IP addresses. Their router/gateway is set to the IP address of Router, and their DNS server is set to the IP address of DNSBox (with a secondary DNS set to an external IP, that of our ISP's DNS server).

    Here are the symptoms we are experiencing: Clients connected to the Wireless2 AP experience slow and unstable connections to the internet. (Slow here is defined as speeds of ~1 KB/s, though ping response times seem normal.) Clients connected via ethernet to Switch experience the same slowness. Clients connected to the Wireless1 AP (i.e. connecting via wireless directly to the ADSL modem) experience normal connections to the internet, and clients connected via ethernet to Router (i.e. connecting via ethernet directly to the ADSL modem) also experience normal connections.

    I also tried to gauge the connection performance between two machines on the network via ethernet: a file transfer between two clients who were both directly connected to Switch was the fastest; a file transfer between one client directly connected to Switch and one client directly connected to Router (which is itself connected to Switch) performed much slower; and a file transfer between two clients directly connected to Router also performed slowly.

    Things I have attempted to diagnose the problem: Restarted Switch -- no change. We tried unplugging ethernet jacks from Switch four at a time and testing the internet connection; the thought here was that perhaps a client on the network has contracted a virus and is spamming the network with traffic. (Not very technical, I know.) Unfortunately we couldn't get any significant increase in performance using this method. There were a couple of times when it seemed to be better, but then the connection speed quickly dropped back to a slow/dead pace. I didn't want to unplug all jacks from Switch because I was concerned that users might be affected or that I would re-plug the jacks incorrectly (should I even be worried about that? A port is a port on a switch, right?). I tried swapping the ethernet cable used to connect Router to Switch -- no change in performance. I tried swapping the port used on Switch for Router -- no change in performance.

    Anyone got any ideas on what this could be? Should I be mentioning specific brand names/models of the hardware used? Virus outbreaks are common in this country/office -- what could I be doing to figure out if a virus is at fault? If it is a virus, it doesn't seem to be generating a lot of traffic to/from the internet, because a) I can still get a good speed if I am directly connected to Router / Wireless1, and b) our ISP data usage has not risen suspiciously. Thanks for your help!

    Update #1: Here are the specs of some of the hardware: Switch is an Edimax ES3132RL 32-Port 10/100 Rackmount Switch; Router is a D-Link DSL-G604T.

    Update #2: I just tried unplugging everything except a laptop and Router from Switch. Speeds are still slow. I guess that means that Router / Switch are not being flooded? It seems more and more likely that the cause is something to do with the interaction between Router and Switch. However, I still can't find any useful resources on setting the LAN speed for either (and I'm not well-versed in these advanced networking configurations).
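
    As an aside, the client-to-client copy tests above can be given hard numbers with a small throughput probe. The following is a rough C# sketch of the idea (the port and payload size are arbitrary choices, not anything specific to this network): run it with the argument server on the receiving machine and client <host> on the sending one, and it reports the measured transfer rate.

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Sockets;

        class ThroughputProbe
        {
            const int Port = 5001;                     // arbitrary test port
            const long TotalBytes = 50L * 1024 * 1024; // 50 MB test payload

            static void Main(string[] args)
            {
                if (args[0] == "server")
                {
                    // Receiving side: accept one connection and time the transfer
                    var listener = new TcpListener(IPAddress.Any, Port);
                    listener.Start();
                    using (var client = listener.AcceptTcpClient())
                    {
                        var stream = client.GetStream();
                        var buffer = new byte[8192];
                        long received = 0;
                        var sw = Stopwatch.StartNew();
                        int n;
                        while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
                            received += n;
                        sw.Stop();
                        Console.WriteLine("{0:N0} bytes in {1:F1}s = {2:F1} KB/s",
                            received, sw.Elapsed.TotalSeconds,
                            received / 1024.0 / sw.Elapsed.TotalSeconds);
                    }
                }
                else
                {
                    // Sending side: push the payload to the server machine
                    using (var client = new TcpClient(args[1], Port))
                    {
                        var stream = client.GetStream();
                        var buffer = new byte[8192];
                        for (long sent = 0; sent < TotalBytes; sent += buffer.Length)
                            stream.Write(buffer, 0, buffer.Length);
                    }
                }
            }
        }

    Running it between pairs of machines (both on Switch, one on Switch and one on Router, both on Router) would reproduce the file-transfer comparison above with repeatable figures.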

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files, etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything, which rapidly autocompletes to any file, type or member. Awesome, except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open, and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently.

    To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC, which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient, as it adds additional bytes to every resource link you embed into a page.

    Alternatives

    I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful; maybe you’ll find some of these useful as well, or at least get some ideas on what can be changed to provide better project flow.

    The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder, so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately, as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes, this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally, I find myself doing a lot less with server side code these days and more with client side code, so I want the server code separated from that.

    The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders, which serve to hold only common libraries and global CSS and Scripts code. In these days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project:

    The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open, that’s typically the only server code I work with regularly. Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.

    Nesting Page specific content

    In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this:

    In order for this to work you also have to make a configuration change inside of the /Views/web.config file, as the Views folder is protected by the BlockViewHandler, which prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml, so that only view retrieval is blocked:

        <system.webServer>
          <handlers>
            <remove name="BlockViewHandler"/>
            <add name="BlockViewHandler" path="*.cshtml" verb="*"
                 preCondition="integratedMode"
                 type="System.Web.HttpNotFoundHandler" />
          </handlers>
        </system.webServer>

    With this in place, from inside of your Views you can then reference those same resources like this:

        <link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />

    and

        <script src="~/Views/Admin/QuizPrognosisItems.js"></script>

    which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well.

    Making this happen is not really as straightforward as it should be with just Visual Studio, unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a shortcut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view.

    I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work, however, as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options, and if you think that would make life easier it’s another option to help group related things together.

    Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification, as you may end up looking in multiple folders instead of a single folder. But again, that’s a one time configuration step that’s easily handled and much less intrusive than constantly having to search for files in your project.
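
    To make that one-time step concrete, here is a minimal sketch of what the bundling side could look like, assuming the stock System.Web.Optimization bundling is in use (the bundle virtual paths are invented for illustration): IncludeDirectory can pick up the nested view-local files recursively, so the multi-folder layout stays a configuration detail.

        using System.Web.Optimization;

        public class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                // Recursively pick up the page-level scripts nested under /Views
                // (the final argument true means "search subdirectories")
                bundles.Add(new ScriptBundle("~/bundles/viewScripts")
                    .IncludeDirectory("~/Views", "*.js", true));

                // Same idea for the page-level styles
                bundles.Add(new StyleBundle("~/bundles/viewStyles")
                    .IncludeDirectory("~/Views", "*.css", true));
            }
        }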
    Client Side Folders

    The particular project shown in the screenshots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically, several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views – and the layout I’ve shown above really focuses on the server side aspect, where there are Razor views with related script and css resources.

    For applications that are more client centric and have a lot more script and HTML template based content, I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach, where all the application pieces that make up the SPA application end up below the App folder. Here’s what that looks like for me – in this case an AngularJs project: the App folder holds both the application specific .js files and the partial HTML views that get loaded into this single page SPA application.

    In this particular Angular SPA application, which has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – AngularJs controllers in this case – with the actual partials. Again, I like the proximity of the view to the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented, this approach starts falling apart and you’ll probably be better off with separate folders in a js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong – this is just what has worked for me, and why!

    Skipping Project Navigation altogether with Resharper

    I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything – you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick-search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you’re looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don’t) and you know what you’re looking for, it’s by far the quickest way to find things in a project. It’s a shame that this sort of simple search interface isn’t part of the native Visual Studio IDE.

    Work how you like to work

    Ultimately it all comes down to workflow, how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex, I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions, and you have the freedom to do just about anything that works for you.

    Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless, think long and hard before breaking those conventions – if there isn’t a good reason to break them, or the changes don’t provide improved workflow, then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate in your environment.

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET, MVC.

  • Oracle Database Security: Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements nicely the overall Oracle security set of solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop!

    With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options:

    - Transparent Data Encryption protecting the communication from the Oracle IRM server to the Database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.
    - Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space.
    - Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administer the database.

    The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

  • Using Fiddler with BizTalk's HTTP Adapter

    - by Christopher House
    I'm working on an orchestration that's retrieving some data from a Java servlet. The servlet takes a parameter string via HTTP post and returns POX (plain old XML, no SOAP here). I was having trouble getting a valid response from the servlet when I was sending some test messages, and I wanted to see what my messages looked like as they went across the wire. Normally, if I were using WCF, I'd set up message logging, but since that's obviously not an option with the HTTP adapter, my thoughts turned to Fiddler. A quick Google search turned up some promising results. The posts I read all referred to using Fiddler with the SOAP adapter, but I thought I could apply the same ideas to the HTTP adapter. This led me to try setting the following context properties:

        HttpRequestMessage(HTTP.UseProxy) = true;
        HttpRequestMessage(HTTP.ProxyName) = "127.0.0.1";
        HttpRequestMessage(HTTP.ProxyPort) = 8888;

    I rebuilt my orch, gac'd it, bounced my host and tried submitting a test message. Fiddler was running, but I didn't see any traffic show up. I tried fully undeploying/redeploying my application and still saw no traffic in Fiddler. I was starting to think that BizTalk was ignoring the proxy settings. To confirm this, I closed Fiddler and submitted a test message. Sure enough, the orch ran to completion, proving that BizTalk was ignoring the proxy settings.

    I went back to my orch to see if there could be any other context properties I needed to set. I saw one that looked promising: HTTP.UseHandlerProxySettings. I set this to false, rebuilt my orch, and this time when I submitted I got an error message, which made sense, since I didn't have Fiddler running. I started up Fiddler, submitted another message, and there it was: my HTTP traffic, just as I hoped. And I was quickly able to figure out what the problem was... I had forgotten to set HTTP.ContentType to application/x-www-form-urlencoded.
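
    Pulling the pieces together, the working combination of context properties would look roughly like this in a Message Assignment shape (reconstructed from the steps described above, not copied verbatim from the original orchestration):

        // Don't let the send handler's own proxy settings override these
        HttpRequestMessage(HTTP.UseHandlerProxySettings) = false;

        // Route the request through Fiddler's local proxy
        HttpRequestMessage(HTTP.UseProxy) = true;
        HttpRequestMessage(HTTP.ProxyName) = "127.0.0.1";
        HttpRequestMessage(HTTP.ProxyPort) = 8888;

        // The servlet expects a form-encoded POST body
        HttpRequestMessage(HTTP.ContentType) = "application/x-www-form-urlencoded";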

  • "TMGR is Missing" after repair-installing Windows XP

    - by djzmo
    Hello there. I have two OSes installed on my computer:

    - Windows XP Professional
    - Windows 7 Ultimate (Release Candidate 1/Build 7100)

    I use the Windows 7 boot loader by default to choose between the OSes. While I was using WinXP, my computer suddenly and continuously lagged, and the only way to fix it was by repair-installing it (I've experienced this many times before, but without W7 installed). Everything went OK, but once my XP was successfully reinstalled, I could not boot my Windows 7 anymore. Every time I tried to boot the hard disk that contains W7, an error appeared: "TMGR is Missing". Now I have no idea how I can get back to my Windows 7. Any kind of help would be appreciated! :)

  • Page Speed and its effect on your business

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/05/29/page-speed-and-itrsquos-affect-on-your-business.aspx

    Page speed was an important issue 10 years ago, when we all had slow modems, but it became less so with the advent of fast broadband connections that even seemed to make Ajax unnecessary. Then along came the mobile internet, and we're back in a world where page speed and asset optimisation are critical again. If you doubt this, an article on SitePoint discussing 'Page Speed and Business Metrics' may change your mind: http://www.sitepoint.com/page-speed-business-metrics/

    Here are some of the figures it quotes:

    - Walmart – saw a 2% increase in conversions for every second of improvement in page load time. Put another way, accumulated growth of revenues went up 1% for every 100 milliseconds of load time improvement.
    - Yahoo – for every 400 milliseconds of improvement, the site traffic increased by 9%.

    The bottom line is that if people have to wait more than a few seconds for a page to fully render, particularly on a mobile device, they'll probably go elsewhere. Ignore this at your peril.

    For two previous posts on the subject see:

    - Page Weight: 10 easy fixes
    - Xat.com Image Optimiser – Useful for RWD/Mobile

  • Hyper-V fails when attaching another disk to a VM; the VM won't start and generates an error

    - by CasperDK
    I'm lost at what to do about this. System: Windows 2008 R2 Hyper-V farm running as a failover cluster with an EVA 4400 as the backend. When I attach a new disk to a VM, it fails when I try to start the VM. If I move the VM to another node, say node 1, I can add the disk and get the VM to start. If I move the VM back to node 2, where the problem arose and where the VM had been running, I get an error during live migration and the VM fails back to node 1, where it did run. So it's like there is something wrong with Hyper-V on node 2 and not node 1. Node 3 has the same issue. Restarting the nodes is NOT an option, since I will have this problem again at a later time, AND because not all the VMs can run on node 1, which means my client company would experience downtime on the VMs not running on node 1. Any fix for this? An update I have missed, perhaps? It has been two years... Here are the errors:

        An error occurred while attempting to change the state of virtual machine XXX.
        'XXX' failed to start.
        Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.'
        Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.'
        Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.'
        'XXX' failed to start. (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)

    The same error is logged again for the VHD on cluster storage:

        An error occurred while attempting to change the state of virtual machine XXX.
        'XXX' failed to start.
        Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.'
        Failed to open attachment 'c:\clusterstorage\volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.'
        Failed to open attachment 'c:\clusterstorage\volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.'
        'XXX' failed to start. (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        'XXX': Failed to open attachment 'c:\clusterstorage\volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)
        'XXX': Failed to open attachment 'c:\clusterstorage\volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08)

    In the Hyper-V logs I found some more errors; in the Hyper-V VMMS log I have this:

        'ServerName' failed to perform the operation. The virtual machine is not in a valid state to perform the operation. (Virtual machine ID 0A6CC4A9-39D6-4413-8CF0-B6DAA35B68D7)

  • IIS7 Custom ASP.NET Errors

    - by Nathan Roe
    I'm trying to set up a custom error page for the IIS 7 404.13 (Content length too large) error. Here are the relevant sections of my web.config file:

        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="13" />
            <error statusCode="404" subStatusCode="13" prefixLanguageFilePath=""
                   path="/FileUpload/Test.aspx" responseMode="ExecuteURL" />
          </httpErrors>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="10240" />
            </requestFiltering>
          </security>
        </system.webServer>

    The response that is being sent back to the browser is blank, yet the Test.aspx file is not blank. Any idea what's going on here?

  • xcopy and timeouts

    - by acidzombie24
    Related to this question and this answer: http://serverfault.com/questions/48839/backup-on-disc-using-truecrypt-corruption-problem/50829#50829

    I want to copy the backup to my HD. However, the timeout seems to be a problem. I see many single files copy, then I see "File creation error - Data error (cyclic redundancy check)." while it is currently trying to copy the next file. When I click My Computer I see nothing, and it also locks up for some seconds (30 maybe). So TrueCrypt is not a good solution if I must be able to copy files back from a volume with a corrupted sector? (Answer in the other thread, please.) Is there any way for me to change the timeout? I looked here for some flags and didn't see any: http://www.scriptlogic.com/support/CustomScripts/XCOPYCommandLineParameters.html

  • Oracle CRM On Demand R17 and Pharma's Future

    - by charles.knapp
    By Denis Pombriant, Beagle Research, March 30 "Oracle announced Release 17 of CRM On-Demand today along with an updated vertical market version for the pharmaceutical industry. Seventeen is a lot of releases even for a SaaS company and Oracle should be proud of the milestone. The same is true of the emphasis on the pharmaceutical industry vertical. Oracle comes to the pharma CRM market with an assist from Siebel, the one time independent leader in CRM that Oracle bought a few years back. Before the acquisition Siebel and its pharma package had managed to corner about nineteen of the top twenty pharmaceutical companies. For a time in the last decade you could go from job to job as a pharma rep taking your Siebel skills with you and feel right at home. The writing on the wall now though is that pharmaceutical sales is transitioning to a SaaS model and Oracle is managing the transition for its customers. Oracle's done a good job of keeping up with changes in the industry and you have to admit that pharma sales is a different kettle of fish than almost anything else in CRM." For additional insights, read here.

  • Not Playing Nice Together

    - by David Douglass
    One of the things I’ve noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives. It’s not a problem if you keep your cores busy with compute intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64 bit OS) is virtualization. But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck. A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous. Two TB of solid state storage will set you back around $7,000! A spinning 2 TB drive is only $150.

    I see a couple of solutions for this. One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive. Another solution is multiple spinning drives. Instead of a single 2 TB drive, get four 500 GB drives. In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive. You’ll need to be smart about what drive you place things on so that the load is spread evenly. Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space.

    All pricing based on Microcenter as of March 14, 2010.

  • FY13 Partner Kickoff Kicks off Summer Right

    - by Kristin Rose
    This summer’s blockbuster movie lineup is far from disappointing. From the Avengers to Prometheus and The Dark Knight Rises, there is no shortage of ‘cling-to-your-seat’ entertainment in store, not to mention buttery popcorn fingers. With all this big screen action taking place, Oracle wanted to take part in some big premieres of its own, which is why we are happy to announce that our FY13 Partner Kickoff event is taking place June 26th. This year we are welcoming several partners from around the globe in person to Oracle’s Headquarters, as well as another 22,000 partners tuning in to help us kick off FY13. Hosted by Judson Althoff, SVP of WWA&C, the Oracle PartnerNetwork FY13 Kickoff is being held live — five times throughout the day — and will include a special message for each region. Have a look at the schedule of shows below:

    - EMEA Kickoff – Tuesday, June 26 @ 2:00 pm BST (London)
    - LAD Kickoff – Tuesday, June 26 @ 4:00 pm UTC (San Paulo)
    - North America Kickoff – Tuesday, June 26 @ 8:30 am PT (San Francisco)
    - Japan Kickoff – Wednesday, June 27 @ 10:00 am JST (Tokyo)
    - Asia Pacific Kickoff – Wednesday, June 27 @ 8:30 am IST (Bangalore) / 11:00 am SGT (Singapore) / 1:00 pm AEST (Sydney)

    Partners near and far will be able to get a first row seat to some exciting Oracle announcements, keynotes, round-tables and a live after-show event hosted by Nick Kritikos, VP of Partner Enablement. Did we mention there is an exciting online component which will allow partners to send in questions or comments and get them answered in real time? Now that deserves two thumbs up! So whether you’re partial to Milk Duds or Junior Mints, grab a box of your favorite candy and sign up for this strategy-driven, partner-focused blockbuster event. To get a sneak peek at what’s in store, watch the short PKO “trailer” below, starring our very own GVP of WWA&C, Lydia Smyers, to find out more.

    To the Depths and Back,
    The OPN Communications Team

  • How to charge an iPhone on a Windows PC without Installing iTunes

    - by Martin Hollingsworth
    I want to be able to charge my iPhone on my work computer, which is running Windows Server 2008 SP1 64-bit. When I plug the iPhone in with the USB cable, it will not charge. Windows attempts to locate a driver for the device but only comes up with a generic Camera device, and even if I allow that to be installed, the iPhone still does not charge. I have checked the computer's BIOS settings and did not find anything relating to power on the USB ports. I also tried this on ports at the back of the computer in addition to those on the front. The PC is a Dell Optiplex 780. As far as I can tell, USB devices do not charge unless Windows has installed an appropriate driver. Since it is a work computer, I do not want to install iTunes, which does include a driver. I have a workaround that I will post as an answer for reference.

  • Files missing after copying from one sdcard to another using Ubuntu 12.04

    - by Abhi Kandoi
    I have two Kingston sdcards, 8GB and 32GB: the 8GB for my HTC Sensation and the 32GB for my tablet (Chinese). I soon ran out of space on my phone and decided to swap the data between the sdcards. I copied the data from the 32GB sdcard to my PC (with Ubuntu 12.04), deleted everything on the 32GB sdcard, and copied the data from the 8GB sdcard to the other sdcard. Finally, I copied the data from my PC to the 8GB sdcard (after first deleting everything that was on it). Now the problem is that the 8GB sdcard has exactly the same data, but the 32GB sdcard has some data missing. All my photos (not copied or posted on Facebook) and songs are lost. Please help: what can I do to get my data back? I have Android Revolution HD (ICS 4.0.3) on my rooted HTC Sensation (India). I suspect it has something to do with my data loss.

  • Software for "High-level" source code (C++) Management

    - by Korchkidu
    After a lot of small-to-medium projects, I have a lot of libraries and test programs here and there. Also, I must admit that some of the "best practices" I learnt are not that "good" IMHO. In particular, documenting your code and making "high-level" documentation is not useful in practice: high-level documentation is not maintained up to date, so I prefer to read the source code directly; and browsing generated developer documentation is a pain (IMHO), so again I prefer to read the source code directly. For that reason, I am looking for a tool which could help me organize all my source code directories in a more "readable" manner. What I need is a tool which:

    - Maintains a UML diagram from C++ source code. I don't need source code generation from UML. USE CASE: I am in this super-tool, I notice a design issue, I change the source code, and when I get back, the UML diagram is updated;
    - Maintains easily browsable call graphs;
    - Lists references to methods, variables, etc. For example, when I want to see where/when a method is called;
    - Helps writing pseudo-code from C++;
    - Embeds a nice C++ source code browser;
    - Is Open Source (that would be great);
    - Works at least on Win7.

    The focus of this tool should be on browsing source code to understand what's going on. For example, when you have a newcomer and you need him to go through the source code. Do you know any great tool? Thanks in advance.

    PS: please do not answer doxygen (great tool, however).

  • IIS 7.0 Web Deploy authentication fails after changing Windows password... help?

    - by Lucifer Sam
    I have a very basic Windows 2008 R2 web server running IIS 7.0. This is just a test/practice server, so I enabled Web Deployment using Windows Authentication. All was well, and I was able to deploy easily from VS 2010 using the Administrator account credentials. After changing the Administrator account password, I get the following error when trying to deploy from Visual Studio (using the new password, of course):

        Error 1: Web deployment task failed...
        An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected.
        The remote server returned an error: (401) Unauthorized.

    If I change the Administrator password back to the original one and try to publish using it, everything works fine again. So what am I missing? Am I supposed to do something in IIS after changing the password? Thanks!

  • 3 Monitors powered from an Nvidia FX580

    - by user26067
    I have an NVidia FX580 card in a Dell Precision T1500. The outputs on the back of the card are DVI and 2x DisplayPort. I tried to run 3 screens from the DVI plus 2x DisplayPort-to-DVI adaptors, but the card won't have it... it'll only run 2 of these screens at the same time. http://www.nvidia.com/object/product_quadro_fx_580_us.html says:

        Display Support:
        Dual Link DVI-I: 1
        DisplayPort: 2
        # of Digital Outputs: 3 (2 out of 3 active at a time)
        # of Analog Outputs: 1

    To me this reads that it will be able to power 2x DVI monitors and 1x VGA monitor. Anyone care to confirm or speculate?

  • So I ran sudo apt-get install kubuntu-full on my Ubuntu... and saw all the apps... now I want it off... help?

    - by Alex Poulos
    I'm running 12.04. I installed Kubuntu to try it out and realized, with all the bloatware applications, that I didn't want it anymore. I was able to uninstall kubuntu-desktop, but there are still packages left over... How can I make sure I get rid of EVERYTHING Kubuntu installed, even the KDE leftovers? Here's some of what's left: when I ran sudo apt-get autoremove kde and then pressed "tab", it displayed this:

        kdeaccessibility kdepim-runtime kdeadmin kde-runtime kde-baseapps kde-runtime-data
        kde-baseapps-bin kdesdk-dolphin-plugins kde-baseapps-data kde-style-oxygen
        kde-config-cron kdesudo kde-config-gtk kdeutils kde-config-touchpad kde-wallpapers
        kdegames-card-data kde-wallpapers-default kdegames-card-data-extra kde-window-manager
        kde-icons-mono kde-window-manager-common kdelibs5-data kde-workspace kdelibs5-plugins
        kde-workspace-bin kdelibs-bin kde-workspace-data kdemultimedia-kio-plugins
        kde-workspace-data-extras kdenetwork kde-workspace-kgreet-plugins
        kdenetwork-filesharing kde-zeroconf kdepasswd kdf kdepim-kresources kdm
        kdepimlibs-kio-plugins kdoctools

    Those are all installed by Kubuntu... correct? I just want to go back to my Ubuntu 12.04 LTS with Gnome2-classic and without all the Kubuntu extras. I started off by just removing unnecessary apps that came with kubuntu-full, then realized I didn't want the whole thing at all and uninstalled kubuntu-full, but it still says I have these as well:

        alex@griever:~$ sudo apt-get --purge remove kubuntu-
        kubuntu-debug-installer kubuntu-netbook-default-settings
        kubuntu-default-settings kubuntu-notification-helper
        kubuntu-firefox-installer kubuntu-web-shortcuts

  • Software to Monitor the Stability of Internet Connection

    - by Ngu Soon Hui
    Thanks to the excellent internet connection service offered by one of the best ISPs in the world, the internet connection in my area is very, very unstable. I can connect some of the time, but MOST of the time the connection will just drop off (with the error message "unable to resolve host"), and after a few minutes it will resume. If I ping a domain name directly (i.e., ping www.google.com -t at the cmd prompt), I will get a "cannot ping" message. Because of the flaky nature of the connection, it's pretty hard to prove to the support staff that the internet connection is unstable. So I am thinking about using a piece of software to record the connection situation, so that I can present it to the technical staff and make sure that they have no excuse not to fix my problem. Is any such software available?

    Edit: Of course, such software should not record my browsing habits, and it must be able to monitor and record the internet connection condition even when I am not online.
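
    Lacking a ready-made tool, even a tiny homegrown logger can capture the evidence. Here is a minimal C# sketch of the idea (the target host, timeout and polling interval are arbitrary choices, not recommendations): it pings a well-known host in a loop and appends a timestamped status line to a log file, so the failure windows show up with times attached.

        using System;
        using System.IO;
        using System.Net.NetworkInformation;
        using System.Threading;

        class ConnectionLogger
        {
            static void Main()
            {
                var ping = new Ping();
                while (true)
                {
                    string status;
                    try
                    {
                        // 2000 ms timeout; www.google.com stands in for "the internet"
                        var reply = ping.Send("www.google.com", 2000);
                        status = reply.Status.ToString();
                    }
                    catch (PingException ex)
                    {
                        // e.g. name resolution failed -- the "unable to resolve host" case
                        status = "FAILED: " + ex.Message;
                    }
                    File.AppendAllText("connection.log",
                        string.Format("{0:u}  {1}{2}", DateTime.Now, status, Environment.NewLine));
                    Thread.Sleep(5000); // check every 5 seconds
                }
            }
        }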

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure, or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDOS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

  • Geek it Up

    - by BuckWoody
    I’ve run into a couple of kinds of folks in IT. Some really like technology a lot – a whole lot – and others treat it more as a job. For those of you in the second camp, you can go back to your drab, meaningless jobs; this post is for the first group. I’m a geek. Not a little bit of a geek, a really big one. I love technology, I get excited about science and electronics in general, and I read math books when I don’t have to. Yes, I have a Star Trek item or two around the house. My daughter is fluent in both Monty Python AND Serenity. I totally admit it. So if you’re like me (OK, maybe a little less geeky than that), then go for it. Put those toys in your cubicle, wear your fan shirt, but most of all, geek up your tools. No, this isn’t an April Fool’s post – I really mean it. I’ve noticed that when I get the larger monitor, better mouse, cooler keyboard, I LIKE coming to work. It’s a way to reward yourself. I’ve even found that it makes work easier if I have the kind of things I enjoy around to work with. So buy that old “clicky” IBM keyboard, get three monitors, and buy a nice headset so that you can set all of your sounds to Monty Python WAVs. And get to work.

  • Getting Started with ASP.NET MVC 3 and Razor

    - by dwahlin
    I had a chance to give a talk on ASP.NET MVC 3, Razor and jQuery today at a company and wanted to post the slides and demos from the talk. The focus was on getting started with ASP.NET MVC 3 projects and .cshtml files, including creating pages using the new Razor syntax (which I personally love... never going back to the Web Forms View Engine) as well as working with jQuery. Topics covered in the demos (download below) include:

    - Binding form data to custom object properties
    - Validating a model using data annotations and IValidatableObject (see the sketch below)
    - Integrating jQuery into MVC sites (using the DataTables plugin)
    - Using the new WebGrid class to generate tables with sorting and paging functionality
    - Integrating Silverlight applications into MVC sites
    - Exposing JSON data from a controller action and consuming it in Silverlight
    - Using the Ajax helper to add AJAX functionality (without jQuery)

    The code and slides from the talk can be downloaded here. If you or your company is interested in training, consulting or mentoring on jQuery or .NET technologies, please visit http://www.thewahlingroup.com for more information. We've provided training, consulting and mentoring services to some of the largest companies in the world and would enjoy sharing our knowledge and real-world lessons learned with you.
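
    To give a flavor of the validation topic above, a model that combines data annotation attributes with IValidatableObject for a cross-property rule might look roughly like this (the Event class and its date rule are invented for illustration; they are not from the talk's download):

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class Event : IValidatableObject
        {
            [Required]
            [StringLength(100)]
            public string Title { get; set; }

            public DateTime Start { get; set; }
            public DateTime End { get; set; }

            // Cross-property rule that individual attributes can't express
            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                if (End <= Start)
                {
                    yield return new ValidationResult(
                        "End must be after Start.",
                        new[] { "End" });
                }
            }
        }

    The attributes handle per-field rules, while Validate runs after they pass, so MVC's ModelState picks up both kinds of errors through the same binding pipeline.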

  • Attend my Fusion sessions

    - by Daniel Moth
    The inaugural Fusion conference was 1 year ago, in June 2011, and I was there doing a demo in the keynote as well as presenting a breakout session. If you look at the abstract and title for that session you won't see the term "C++ AMP" in there, because the technology hadn't been announced and we didn't want to spill the beans ahead of the keynote, where the technology was announced. It was only an announcement; we did not give any bits out, and in fact the first bits came three months later, in September 2011, with the Beta following in February 2012. So it really feels great, 1 year later, to be back at Fusion presenting two sessions on C++ AMP, demonstrating our progress from that announcement to the Visual Studio 2012 Release Candidate that came out last week. If you are attending Fusion (in person, or virtually later), be sure to watch my two-part session. Part 1 is PT-3601 on Tuesday at 4pm, and part 2 is PT-3602 on Wednesday at 4pm. Here is the shared abstract for both parts:

    Harnessing GPU Compute with C++ AMP

    C++ AMP is an open specification for taking advantage of accelerators like the GPU. In this session we will explore the C++ AMP implementation in Microsoft Visual Studio 2012. After a quick overview of the technology, understanding its goals and its differentiation compared with other approaches, we will dive into the programming model and its modern C++ API. This is a code-heavy, interactive, two-part session, where every part of the library will be explained. Demos will include showing off the richest parallel and GPU debugging story on the market, in the upcoming Visual Studio release.

    See you there! Comments about this post by Daniel Moth welcome at the original blog.

  • Ubuntu 12.04.1 Radeon 9550 stuck with 640x480, works in Geexbox

    - by Betty
    I am a complete new user trying to set up Ubuntu on an old desktop. It has an AGP Radeon 9550 graphics card. I am running Ubuntu from a USB drive with persistence, as the PC currently has no hard drive. I seem to be stuck in 640x480 mode. The desktop itself is larger, but the monitor display is stuck at 640x480, and in Settings > Displays only the 640x480 option is available.

    What I have found out so far:

    - The proprietary ati drivers no longer support my card.
    - If 3D isn't an issue (it's not), the open source driver should be fine. This should be installed by default, so in theory I am using it already.
    - xserver-xconf/pci/*.ids doesn't show any entries for the card's PCI id.
    - Hardware additional drivers shows no proprietary drivers.

    I tried booting into the current version of Geexbox from a USB stick, and this set the resolution correctly by default, so I know it can be done, but I have no idea how. How can I tell what driver the card is using, and how can I get the higher resolutions back?

  • lsyncd + csync2 : cluster of 3 or more nodes

    - by sbrattla
    I've got 3 (and potentially more) web servers hosting the same content (fronted by a load balancer). Thus, I need to make sure that the files on these web servers are the same. It appears that csync2 in combination with lsyncd is able to synchronize a cluster of nodes, but according to this article there's a problem with cyclic events in such a setup. In other words, the author writes that a file change on one machine would trigger a replication event to the other machines, which would in turn trigger a replication event back to the original machine. This appears to be a consequence of the setup, which uses lsyncd (and inotify) to catch file modification events and from there trigger csync2 to replicate the file tree. Does anyone have experience with lsyncd in combination with csync2? Have you had trouble with cyclic events?
