Search Results

Search found 5011 results on 201 pages for 'grand master t'.


  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the later enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
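    To illustrate the difference between the two record types, here is a minimal verification sketch from a shell, assuming the placeholder names mysitename.com and mysitename.azurewebsites.net (the actual *.azurewebsites.net host name depends on your site, and the expected answers below are assumptions for illustration):

        # www host should point at the Azure site via a CNAME record
        dig +short www.mysitename.com CNAME
        # assumed output: mysitename.azurewebsites.net.

        # the naked domain should point at the site's IP via an A record
        dig +short mysitename.com A
        # assumed output: the IP address shown for your site in the Windows Azure Portal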
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
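    For reference, here is a rough sketch of what the continuous deployment flow looks like from the command line once a GitHub repository has been linked to a site through the portal as described above; the remote and branch names are assumptions for illustration:

        # work on the branch the web site is configured to pull from (master by default)
        git checkout master
        git add .
        git commit -m "Update home page"

        # pushing to the linked GitHub repository triggers the deployment:
        # GitHub notifies Windows Azure, which pulls the source and deploys the new version
        git push origin master

        # for a second "staging" site configured to track a staging branch instead
        git checkout -b staging
        git push origin staging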

    Read the article

  • Tab Sweep - Upgrade to Java EE 6, Groovy NetBeans, JSR310, JCache interview, OEPE, and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more : • Implementing JSR 310 (New Date/Time API) in Java 8 Is Very Strongly Favored by Developers (java.net) • Upgrading To The Java EE 6 Web Profile (Roger) • NetBeans for Groovy (blogs.oracle.com) • Client Side MOXy JSON Binding Explained (Blaise) • Control CDI Containers in SE and EE (Strub) • Java EE on Google App Engine: CDI to the Rescue - Aleš Justin (jaxenter) • The Java EE 6 Example - Testing Galleria - Part 4 (Markus) • Why is OpenWebBeans so fast? (Strub) • Welcome to the new Oracle Enterprise Pack for Eclipse Blog (blogs.oracle.com) • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API (Spotlight Podcast) • Glassfish cluster installation and administration on top of SSH + public key (Paulo) • Jfokus 2012 on Parleys.com (Parleys) • Java Tuning in a Nutshell - Part 1 (Rupesh) • New Features in Fork/Join from Java Concurrency Master, Doug Lea (DZone) • A Java7 Grammar for VisualLangLab (Sanjay) • Glassfish version 3.1.2: Secure Admin must be enabled to access the DAS remotely (Charlee) • Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6

    Read the article

  • Sharing A Stage: JDeveloper/ADF & NetBeans/Java EE 6?

    - by Geertjan
    A highlight for me during last week's Oracle Developer Day in Romania (which I blogged about here) was meeting Jernej Kaše (who is from Slovenia, just like my philosopher hero Slavoj Žižek), who is an Oracle Fusion Middleware evangelist. At the conference, while I was presenting NetBeans and Java EE 6 in one room, Jernej was presenting JDeveloper and ADF in another room. The application he created looks as follows, i.e., a realistic CRUD app, with a master/detail view, a search feature, and validation: In a conversation during a break, we started imagining a scenario where the two of us would be on the same stage, taking turns talking about NetBeans/Java EE and JDeveloper/ADF. In that way, attendees at a conference wouldn't need to choose which of the two topics to attend, because they'd be handled in the same session, with the session possibly being longer so that sufficient time could be spent on the respective technologies. (The JDeveloper/ADF session would then not be competing with the NetBeans/Java EE 6 session, since they'd be handled simultaneously.) The session would focus on the similarities/differences between the two respective tools/solutions, which would be extremely interesting and also unique. The crucial question in making this kind of co-presentation possible is whether (and how quickly) an application such as the one created above with JDeveloper/ADF could be created with NetBeans/Java EE 6. The NetBeans/Java EE 6 story is extremely strong on the model and controler levels, but less strong on the view layer. Though there are choices between using PrimeFaces, RichFaces, and IceFaces, that support is quite limited in the absence of a visual designer or of other specific tools (e.g., code generators to generate snippets of PrimeFaces) connected to JSF component libraries. However, it so happens that in recent months we at NetBeans have established really good connections with the PrimeFaces team (more about that another time). So I asked them what it would take to write the above UI in PrimeFaces. The PrimeFaces team were very helpful. They sent me the following screenshot, which is of the UI they created in PrimeFaces, reproducing the ADF screenshot above: Of course, the above is purely the UI layer, there's no EJB and entity classes and data connection hooked into it yet. 
However, this is the Facelets file that the PrimeFaces team sent me, i.e., using the PrimeFaces component library, that produces the above result: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:p="http://primefaces.org/ui"> <f:view> <h:head> <style type="text/css"> .alignRight { text-align: right; } .alignLeft { text-align: left; } .alignTop { vertical-align: top; } .ui-validation-required { color: red; font-size: 14px; margin-right: 5px; position: relative; vertical-align: top; } .ui-selectonemenu .ui-selectonemenu-trigger .ui-icon { margin-top: 7px !important; } </style> </h:head> <h:body> <h:form prependId="false" id="form"> <p:panel header="Employees"> <h:panelGrid columns="4" id="searchPanel"> Search <p:selectOneMenu> <f:selectItem itemLabel="FirstName" itemValue="FirstName" /> <f:selectItem itemLabel="LastName" itemValue="LastName" /> <f:selectItem itemLabel="Email" itemValue="Email" /> <f:selectItem itemLabel="PhoneNumber" itemValue="PhoneNumber" /> </p:selectOneMenu> <p:inputText /> <p:commandLink process="searchPanel" update="@form"> <h:graphicImage name="next.gif" library="img" /> </p:commandLink> </h:panelGrid> <h:panelGrid columns="3" columnClasses="alignTop,,alignTop" style="width:90%;margin-left:10%"> <h:panelGrid columns="2" columnClasses="alignRight,alignLeft"> <h:outputLabel for="firstName">FirstName</h:outputLabel> <p:inputText id="firstName" /> <h:outputLabel for="lastName"> <sup class="ui-validation-required">*</sup>LastName </h:outputLabel> <p:inputText id="lastName" style="width:250px;" /> <h:outputLabel for="email"> <sup class="ui-validation-required">*</sup>Email </h:outputLabel> <p:inputText id="email" style="width:250px;" /> <h:outputLabel for="phoneNumber" value="PhoneNumber" /> <p:inputMask id="phoneNumber" mask="999.999.9999" /> <h:outputLabel for="hireDate"> <sup class="ui-validation-required">*</sup>HireDate</h:outputLabel> <p:calendar id="hireDate" pattern="MM/dd/yyyy" showOn="button" /> </h:panelGrid> <p:outputPanel style="min-width:40px;" /> <h:panelGrid columns="2" columnClasses="alignRight,alignLeft"> <h:outputLabel for="jobId"> <sup class="ui-validation-required">*</sup>JobId </h:outputLabel> <p:selectOneMenu id="jobId" > <f:selectItem itemLabel="Administration Vice President" itemValue="Administration Vice President" /> <f:selectItem itemLabel="Vice President" itemValue="Vice President" /> </p:selectOneMenu> <h:outputLabel for="salary">Salary</h:outputLabel> <p:inputText id="salary" styleClass="alignRight" /> <h:outputLabel for="commissionPct">CommissionPct</h:outputLabel> <p:inputText id="commissionPct" style="width:30px;" maxlength="3" /> <h:outputLabel for="manager">ManagerId</h:outputLabel> <p:selectOneMenu id="manager"> <f:selectItem itemLabel="Steven King" itemValue="Steven" /> <f:selectItem itemLabel="Michael Cook" itemValue="Michael" /> <f:selectItem itemLabel="John Benjamin" itemValue="John" /> <f:selectItem itemLabel="Dav Glass" itemValue="Dav" /> </p:selectOneMenu> <h:outputLabel for="department">DepartmentId</h:outputLabel> <p:selectOneMenu id="department"> <f:selectItem itemLabel="90" itemValue="90" /> <f:selectItem itemLabel="80" itemValue="80" /> <f:selectItem itemLabel="70" itemValue="70" /> <f:selectItem itemLabel="60" itemValue="60" /> <f:selectItem itemLabel="50" itemValue="50" /> <f:selectItem 
itemLabel="40" itemValue="40" /> <f:selectItem itemLabel="30" itemValue="30" /> <f:selectItem itemLabel="20" itemValue="20" /> </p:selectOneMenu> </h:panelGrid> </h:panelGrid> <p:outputPanel id="buttonPanel"> <p:commandButton value="First" process="@this" update="@form" /> <p:commandButton value="Previous" process="@this" update="@form" style="margin-left:15px;" /> <p:commandButton value="Next" process="@this" update="@form" style="margin-left:15px;" /> <p:commandButton value="Last" process="@this" update="@form" style="margin-left:15px;" /> </p:outputPanel> <p:tabView style="margin-top:25px"> <p:tab title="Job History"> <p:dataTable var="history"> <p:column headerText="StartDate"> <h:outputText value="#{history.startDate}"> <f:convertDateTime pattern="MM/dd/yyyy" /> </h:outputText> </p:column> <p:column headerText="EndDate"> <h:outputText value="#{history.endDate}"> <f:convertDateTime pattern="MM/dd/yyyy" /> </h:outputText> </p:column> <p:column headerText="JobId"> <h:outputText value="#{history.jobId}" /> </p:column> <p:column headerText="DepartmentId"> <h:outputText value="#{history.departmentIdId}" /> </p:column> </p:dataTable> </p:tab> </p:tabView> </p:panel> </h:form> </h:body> </f:view> </html> Right now, NetBeans IDE only has code completion to create the above. So there's not much help for creating such a UI right now. I don't believe that a visual designer is mandatory to create the above. A few code generators and file templates could do the job too. And I'm looking forward to seeing those kinds of tools for PrimeFaces, as well as other JSF component libraries, appearing in NetBeans IDE in upcoming releases. A related option would be for the NetBeans generated CRUD app to include the option of having a master/detail view, as well as the option of having a search feature, i.e., the application generators would provide the option of having additional features typical in Java enterprise apps. In the absence of such tools, there still is room, I believe, for NetBeans/Java EE and JDeveloper/ADF sharing a stage at a conference. The above file would have been prepared up front and the presenter would state that fact. The UI layer is only one aspect of a Java EE 6 application, so that the presenter would have ample other features to show (i.e., the entity class generation, the tools for working with servlets, with session beans, etc) prior to getting to the point where the statement would be made: "On the UI layer, I have prepared this Facelets file, which I will now show you can be connected to the lower layers of the application as follows." At that point, the session beans could be hooked into the Facelets file, the file would be saved, the browser refreshed, and then the whole application would work exactly as the ADF application does. So, Jernej, let's share a stage soon!

    Read the article

  • Is It Time To Specialize?

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/18/is-it-time-to-specialize.aspx Over my career I have made a living as a generalist.  I have been a jack of all trades and a master of none.  It has served me well in that I am able to move from one technology to another quickly and make myself productive.  Where it becomes a problem is deep knowledge.  I am constantly digging for the things that aren’t basic knowledge.  How do you make a product like WCF or Windows RT do more than just “Hello World”? As an architect I need to be a jack of all trades.  This is what helps me bring the big picture of a project into focus for developers with different skills so they can accomplish the goals of the project.  It is key when the mix of technologies crosses Windows, Unix and mainframe with different languages and databases.  The larger the company the project is for, the more likely this scenario will arise. As a consultant and a developer I need to have specialized skills in order to get the job done efficiently.  If I have a SharePoint or Windows Phone project, knowing the object model details and possible roadblocks of the technology allows me to stay within budget as well as better advise the client on technology decisions. What is the solution?  Constant learning and associating with developers who specialize in a variety of technologies is the best thing you can do.  You may have thought you were done with classes when you left college, but in this industry you need to constantly be learning new products and languages.  The ultimate answer is that you must generally specialize.  Learn as many subject areas as possible, but go deep whenever you can.  Sleep is overrated.  Good luck. del.icio.us Tags: software development,software architecture,specialization,generalist

    Read the article

  • snmpd agent sends duplicate traps

    - by jsnmp
    I am on Ubuntu 10.04.4 LTS, and I cannot upgrade to a higher version. I have installed the snmpd agent (NET-SNMP version 5.4.2.1) with an apt-get install snmpd command. When an event occurs which sends a trap, two traps are sent for each such event instead of one. For example, when I shut down the agent with command /etc/init.d/snmpd stop, two shutdown traps are sent to the destination host. If I then start back up the agent with command /etc/init.d/snmpd start, then two cold start traps are sent to the destination host. Is this a known issue? Is there a fix for this, or is there a configuration change that is needed to prevent the sending of the duplicate trap?

    These are the contents of the /etc/snmp/snmpd.conf file:

        rocommunity public
        authtrapenable 1
        trap2sink <trap destination hostname> public

    These are the contents of the /etc/default/snmpd file:

        # This file controls the activity of snmpd and snmptrapd

        # MIB directories.  /usr/share/snmp/mibs is the default, but
        # including it here avoids some strange problems.
        export MIBDIRS=/usr/share/snmp/mibs

        # snmpd control (yes means start daemon).
        SNMPDRUN=yes

        # snmpd options (use syslog, close stdin/out/err).
        SNMPDOPTS='-Ls3d -Lf /dev/null -u snmp -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

        # snmptrapd control (yes means start daemon).  As of net-snmp version
        # 5.0, master agentx support must be enabled in snmpd before snmptrapd
        # can be run.  See snmpd.conf(5) for how to do this.
        TRAPDRUN=no

        # snmptrapd options (use syslog).
        TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid'

        # create symlink on Debian legacy location to official RFC path
        SNMPDCOMPAT=yes
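    One thing worth checking, as a hedge, is whether more than one trap sink ends up configured once all of net-snmp's configuration files are read; net-snmp also keeps a persistent configuration file under /var/lib/snmp, and a duplicate trapsink/trap2sink line there would produce exactly one extra copy of every trap. A minimal diagnostic sketch (paths are the usual Debian/Ubuntu locations):

        # look for trap sink directives in every config file snmpd might read
        grep -Rn "trapsink\|trap2sink\|informsink" /etc/snmp /var/lib/snmp /usr/share/snmp 2>/dev/null

        # also confirm only one snmpd process is running
        ps aux | grep [s]nmpd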

    Read the article

  • SD Card Reader not working in Ubuntu 12.04

    - by tripkane
    I have a read many other posts on this issue and believe that Ubuntu 12.04 is not even recognizing my SD Card Reader as just that: Computer Model: Metabox (Australian builder of Clevo laptops) / Clevo P150EM OS: Ubuntu 12.04 (64 Bit) CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz HD: 120GB Intel 550/520MB/s SSD According to the people who built my computer, the specs of the SD Card reader in my comp are as follows: Manufacture: Realtek Semiconduct Corp. Location: PCI bus 3 Hardware ID: PCI\Ven_10EC&DEV_5289&SUBSYS_51051558 Physical device object name: \Device\NTPNP_PCI0015 Here are the relevant outputs of the following commands run from the terminal: sudo lshw *-generic UNCLAIMED description: Unassigned class product: Realtek Semiconductor Co., Ltd. vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 version: 01 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list configuration: latency=0 resources: memory:f6a00000-f6a0ffff sudo lspci -v -nn 03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device [10ec:5289] (rev 01) Subsystem: CLEVO/KAPOK Computer Device [1558:5105] Flags: bus master, fast devsel, latency 0, IRQ 4 Memory at f6a00000 (32-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 3 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Endpoint, MSI 00 Capabilities: [b0] MSI-X: Enable- Count=1 Masked- Capabilities: [d0] Vital Product Data Capabilities: [100] Advanced Error Reporting Capabilities: [140] Virtual Channel Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00 Does the unassigned details of these outputs mean that Ubunutu desn't know that the SD Card Reader is one and what do with it? and if so how should I go about fixing it?? Cheers ;)
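    Since lshw reports the device as UNCLAIMED, a reasonable first step is to confirm that no kernel module has bound to it and to see whether a Realtek card-reader module is available at all. The commands below are only a diagnostic sketch; the exact driver module for the 10ec:5289 chip varies by kernel and may have to come from Realtek:

        # show which driver (if any) is bound to the card reader
        lspci -nnk -d 10ec:5289

        # check the kernel log for messages about the device
        dmesg | grep -i -e realtek -e 5289

        # list any Realtek card-reader modules shipped with the running kernel
        find /lib/modules/$(uname -r) -iname '*rts*'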

    Read the article

  • Oracle eAM Webcast Series Announced (May-Dec 2010)

    - by [email protected]
    A series of free webinars with ReliabilityWeb will present key product capabilities of Oracle eAM and how they support maintenance and reliability best practices. Through this web-seminar series, companies can understand how to achieve better ROI. ReliabilityWeb will be using this as a key component of their initiative to build a stronger Oracle community.  For Oracle, this program demonstrates leadership and commitment to the maintenance systems marketplace. Topics (note all times are East):
    1. How can Oracle eAM enhance and support your reliability program? (May 13, 2010) (1-2PM)
    2. Upgrading to Oracle eAM R12 - What's the value, when's the right time, what's involved and how do you get there? (June 17, 2010) (1-2PM)
    3. Improving maintenance and reliability by aligning people, processes and systems. (July 15, 2010) (1-2PM)
    4. Using Oracle eAM to drive your Condition Based Maintenance program. (July 29, 2010) (1-2PM)
    5. Why and how do you get the power of Oracle eAM out to the people that are really doing maintenance - the technicians. (August 12, 2010) (1-2PM)
    6. Standardizing and streamlining your maintenance work with Oracle eAM. (September 16, 2010) (1-2PM)
    7. Standardizing maintenance and reliability data - How do you get there? (October 21, 2010) (1-2PM)
    8. Using Oracle eAM to establish a Failure Reporting and Corrective Action System (FRACAS). (November 18, 2010) (1-2PM)
    9. Maintenance Work Scheduling in Oracle eAM - Capabilities and Limitations. (December 16, 2010) (1-2PM)
    For additional information contact Jay West, EAM Master, +1.205.515.4326

    Read the article

  • Want to book hotel stay using Bitcoin? Book using Expedia.com

    - by Gopinath
    The online travel booking leader Expedia announced that it has started accepting Bitcoin for booking hotels on its website. For those who are new to Bitcoin, it is a digital currency in which transactions can be performed without the need for a central bank – it's more like an internet of currency. At the moment Expedia is accepting Bitcoin payments only for hotel bookings; in the future it may allow flight and vacation package bookings as well. When Expedia customers want to pay for a hotel using Bitcoin, they are transferred to Coinbase, a third-party Bitcoin processor, to accept the payment, and are then redirected back to Expedia.com to complete the booking. This simple process should help drive mainstream adoption of Bitcoin – a win-win for digital currency users as well as for the travel company, which saves a lot on card processing fees. Online retailers pay around 3% of the transaction amount in fees to credit card processing companies like Visa and MasterCard when they accept cards, but Coinbase charges a fee of just 1 percent for processing Bitcoin. Regardless of how customers take to Bitcoin-based payments on Expedia, and of the savings on transaction fees, this move gives Expedia bragging rights as the first e-commerce giant to accept digital currency! Image credit: Jonathan Caves

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both, supervised and unsupervised models, over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established. There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publically expose actual fraudulent behaviors. Therefore it can be quite difficult if not impossible to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small, typically much less than 1% of transactions is fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms. SQL Server suite includes all of the software required to create, deploy any maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract transform–load (ETL) applications. 
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigate the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintaining a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable and can be used in Excel 2010, is already included in Excel 2013. It brings OLAP functionalities directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.

    Read the article

  • Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory - webcast follow up

    - by Darin Pendergraft
    Thanks to all of the guest speakers on our Sun2Oracle webcast: Steve from Hub City Media, Albert from UCLA and our own Scott Bonell. During the webcast, we tried to answer as many questions as we could, but there were a few that we needed a bit more time to answer.  Albert from UCLA sent me the following information: Alternate Directory Evaluation: We were happy with Sun DSEE. OUD, based on the research we had done, was a logical continuation of DSEE.  If we moved away, it was to go open source. UCLA evaluated OpenLDAP, OpenDS and Red Hat's 389 Directory. We also briefly entertained Active Directory. Ultimately, we decided to stay with OUD for the Enterprise Directory, and adopt OpenLDAP for the non-critical edge directories. Hardware: For the Enterprise Directory, UCLA runs 3 Dell PowerEdge R710 servers. Each server has 12GB RAM and 2 2.4GHz Intel Xeon E5645 processors. We run 2 of those servers at UCLA's Data Center in a semi active-passive configuration. The 3rd server is located at UC Berkeley. All three are multi-master replicated. At run time, the bulk of LDAP query requests go to 1 server. Essentially, all of our authn/authz traffic is being handled by 1 server, with the other 2 acting as redundant backups.

    Read the article

  • links for 2011-01-07

    - by Bob Rhubart
    Enterprise Software Development with Java: GlassFish 3 vs. JBoss 6 - Is the Web Profile ready for the Enterprise? (tags: ping.fm) Bay Area Coherence Special Interest Group (BACSIG) Jan 20 The Jan 20 meeting of the Bay Area Coherence Special Interest Group (BACSIG) features presentations by Rob Lee (Coherence 3.6 Clustering Features), Rao Bhethanabotla (Efficient Management and Update of Coherence Clusters to Reduce Down Time), and Christer Fahlgren (How To Build a Coherence Practice). (tags: oracle otn coherence sig) Michael T. Dinh: VirtualBox Command Line "I have manually configured VirtualBox Host-Only Ethernet Adapter for static IP. However, the IP can change after reboot which affects connectivity with the Guest with static IP." - Michael T. Dinh (tags: oracle virtualization virtualbox) Michel Schildmeijer: Oracle WebLogic - Configuring DyeInjection Monitor "A fairly unknown tool within WLDF (WebLogic Diagnostic Framework) is the DyeInjection Monitor. With this monitor configured one can track a  user or client address within a WebLogic system." - Michel Schildmeijer (tags: oracle weblogic) David Butler: Master Data Management Implementation Styles "Oracle MDM Solutions provide strong data federation and integration capabilities which are key to enabling the use of the Confederated Hub as a possible architectural style approach." - David Butler (tags: oracle otn softwarearchitecture) Kenneth Downs: Can You Really Create A Business Logic Layer? "Don't be afraid to use the database for what it is good for, and leave the arguments about "where everything belongs" to those with too much time on their hands." - Kenneth Downs (tags: businesslogic database softwarearchitecture) IASA Perspectives Magazine - Fall 2010 Fall 2010 edition of International Association of Software Architects (IASA) Perspectives magazine: (tags: softwarearchitecture iasi entarch) Using the DB Adapter in Oracle SOA Suite: returning status information "In this tutorial I will show you an example of how how can implement this within the Oracle SOA Suite (and because the DB Adapter can also be used within the Oracle Service Bus, the principles also apply to implementing it within the OSB)." - Henk Jan van Wijk (tags: oracle otn soa soasuite database) 4th International SOA Symposium + 3rd International Cloud Symposium by Thomas Erl - call for presentations (SOA Partner Community Blog) The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts. The two-day conference agenda will be organized into several tracks. (tags: oracle otn soa cloud)

    Read the article

  • Intel 82576 Network card

    - by No1_Melman
    I have an Intel dual port pcie NIC card with two 82576 interfaces according to ubuntu 12.04. I run the command sudo lshw -html > /home/melman/Documents/hardware.html and it shows both of the interfaces but they're grayed out?! How can enable them? ifconfig output: bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 inet addr:192.168.100.2 Bcast:192.168.100.255 Mask:255.255.255.0 UP BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth0 Link encap:Ethernet HWaddr e0:69:95:d1:db:ff inet addr:192.168.10.63 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::e269:95ff:fed1:dbff/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2903 errors:0 dropped:0 overruns:0 frame:0 TX packets:2627 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1524738 (1.5 MB) TX bytes:430196 (430.1 KB) Interrupt:20 Memory:f7f00000-f7f20000 eth3 Link encap:Ethernet HWaddr 00:50:b6:50:a7:f9 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth4 Link encap:Ethernet HWaddr 00:1b:21:6e:99:77 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Memory:f7c00000-f7c20000 eth5 Link encap:Ethernet HWaddr 00:1b:21:6e:99:76 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Memory:f7c20000-f7c40000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:246 errors:0 dropped:0 overruns:0 frame:0 TX packets:246 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:17584 (17.5 KB) TX bytes:17584 (17.5 KB)
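    The eth4 and eth5 entries in the ifconfig output (which, judging by their consecutive MAC addresses, appear to be the two 82576 ports) are simply administratively down - there is no UP flag - which is typically why a GUI greys them out. A minimal sketch of bringing them up by hand or adding them to the existing bond0; the interface names are taken from the output above, so adjust as needed:

        # bring the ports up manually
        sudo ifconfig eth4 up
        sudo ifconfig eth5 up

        # or enslave them to the existing bond (requires the ifenslave package)
        sudo ifenslave bond0 eth4 eth5

        # check link state afterwards
        cat /sys/class/net/eth4/operstate
        cat /sys/class/net/eth5/operstate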

    Read the article

  • Job title inflation and fluffing

    - by Amir Rezaei
    When you work on the same project for a relatively long time you gain experience. You may also master many new technologies. Besides the coding, you may also do work that would classify as other roles. There is, however, one part of your career that may not get updated: your job title. It seems that besides all the technology hype there is also job title hype. It all depends on which company you work for. Many companies give employees better job titles because they want to keep them. The employee doesn’t change jobs because the current title is much better, even if they would get better working conditions and benefits if they changed jobs. When you consider changing your job you notice that your job title is kind of “outdated”. People with less skill have a much better title than you do. You may very well be able to explain what you did on your project, but the fact is that many employers go by the title. So here are the questions: Do you change your current title in your CV? What are the other options? Here is some good reading regarding this phenomenon: Job title inflation Job title fluffing

    Read the article

  • C# Role Provider for multiple applications

    - by Juventus18
    I'm making a custom RoleProvider that I would like to use across multiple applications in the same application pool. For the administration of roles (create new role, add users to role, etc..) I would like to create a master application that I could login to and set the roles for each additional application. So for example, I might have AppA and AppB in my organization, and I need to make an application called AppRoleManager that can set roles for AppA and AppB. I am having troubles implementing my custom RoleProvider because it uses an initialize method that gets the application name from the config file, but I need the application name to be a variable (i.e. "AppA" or "AppB") and passed as a parameter. I thought about just implementing the required methods, and then also having additional methods that pass application name as a parameter, but that seems clunky. i.e. public override CreateRole(string roleName) { //uses the ApplicationName property of this, which is set in web.config //creates role in db } public CreateRole(string ApplicationName, string roleName) { //creates role in db with specified params. } Also, I would prefer if people were prevented from calling CreateRole(string roleName) because the current instance of the class might have a different applicationName value than intended (what should i do here? throw NotImplementedException?). I tried just writing the class without inheriting RoleProvider. But it is required by the framework. Any general ideas on how to structure this project? I was thinking make a wrapper class that uses the role provider, and explicitly sets the application name before (and after) and calls to the provider something like this: static class RoleProviderWrapper { public static CreateRole(string pApplicationName, string pRoleName) { Roles.Provider.ApplicationName = pApplicationName; Roles.Provider.CreateRole(pRoleName); Roles.Provider.ApplicationName = "Generic"; } } is this my best-bet?

    Read the article

  • Modern techniques for spriting

    - by DevilWithin
    Hello, I would like to know the workflow for making modern 2D game artwork. How are the assets made nowadays? Bitmap? Vector-based? Hand-drawn and painted? Drawn digitally? Modeled in 3D and exported to bitmaps? I would like some information on programs as well, for fine-looking art. Why does Flash's vector art style look good in most games? How do I make equivalent graphics with external tools? Or something equally good that isn't vector-based, anyway. Any special hints for animating? An answer oriented towards a one-man-army indie developer with little experience but some artistic sense would be appreciated! I'm not a complete dummy with paint programs, but not a master either; I just need efficient ways to achieve results. Thanks. NOTE: Pixel art is not the goal of this question, so nothing related to direct pixel manipulation needs to be brought up here, but you're free to do exactly that :)

    Read the article

  • Asp.net tips and tricks

    - by ybbest
    Asp.net tips and tricks Here is a summary of articles I found very useful over the years while I am working on asp.net TRULY Understanding View state http://weblogs.asp.net/infinitiesloop/archive/2006/08/03/Truly-Understanding-Viewstate.aspx TRULY Understanding Dynamic Controls http://weblogs.asp.net/infinitiesloop/archive/2006/08/25/TRULY-Understanding-Dynamic-Controls-_2800_Part-1_2900_.aspx ASP.Net 2.0 – Master Pages: Tips, Tricks, and Traps http://odetocode.com/articles/450.aspx ASP.NET Tip – Use The Label Control Correctly http://haacked.com/archive/2007/02/15/asp.net_tip_-_use_the_label_control_correctly.aspx Asp.net httphandlers http://www.michaelflanakin.com/Articles/NET/NET1x/ImplementingHTTPHandlers/tabid/173/Default.aspx http://support.microsoft.com/default.aspx?scid=kb;EN-US;308001 http://msdn.microsoft.com/en-us/library/ms972974.aspx Asp.net ajax http://encosia.com/ ASP.NET 2.0 Tips, Tricks, Recipes and Gotchas http://weblogs.asp.net/scottgu/pages/ASP.NET-2.0-Tips_2C00_-Tricks_2C00_-Recipes-and-Gotchas.aspx Mastering Page-UserControl Communication http://www.codeproject.com/KB/user-controls/Page_UserControl.aspx Comparing Web Site Projects and Web Application Projects Web Deployment Projects .NET Radio Show http://www.dotnetrocks.com/ Herdingcode http://herdingcode.com/ Clean Code talk http://www.objectmentor.com/videos/video_index.html .NET Video Show http://www.dnrtv.com/ .Net User group http://chicagoalt.net/home http://exposureroom.com/members/RIAViewMirror.aspx/assets/ FAQ Why should you remove unnecessary C# using directives? http://stackoverflow.com/questions/136278/why-should-you-remove-unnecessary-c-using-directives http://stackoverflow.com/questions/2009471/what-is-the-benefit-of-removing-redundant-imports-in-vb-net-or-using-in-c-file http://codeclimber.net.nz/archive/2009/12/30/best-of-2009-the-5-most-popular-posts.aspx

    Read the article

  • Ralink RT3060 wireless device configuration on ubuntu 12.04

    - by Stephan
    Concerning "How do I get a Ralink RT3060 wireless card working?": I'm running Ubuntu 12.04 with an 'LWPX07 Edimax EW-7711In 150M 1T1R WL PCI Card', which has an RT3060 chip. Out of the box the card is recognized as rt2800sta. I tried solution one; that didn't work. The card still connects to the wireless network, but it is too slow to load any pages. Then I tried solution 2, but then the network-manager doesn't see any wireless device.

        $ iwconfig
        lo      no wireless extensions.
        ra0     Ralink STA
                Link Quality:0  Signal level:0  Noise level:0
                Rx invalid nwid:0  invalid crypt:0  invalid misc:0
        eth0    no wireless extensions.

        $ lsmod
        Module                  Size  Used by
        rt3562sta             882296  0

        $ lspci -v
        05:02.0 Network controller: Ralink corp. RT3060 Wireless 802.11n 1T/1R
                Subsystem: Edimax Computer Co. Device 7711
                Flags: bus master, slow devsel, latency 64, IRQ 23
                Memory at ff9f0000 (32-bit, non-prefetchable) [size=64K]
                Capabilities: <access denied>
                Kernel driver in use: rt2860
                Kernel modules: rt3562sta, rt2800pci

    Am I missing a configuration step? How do I tell the network card which driver to use? Thanks in advance, Stephan.

    Update: I found the problem. As described in Steve's blog http://steveswinsburg.wordpress.com/2011/03/12/how-to-install-a-d-link-dwa-525-wireless-network-card-in-ubuntu-10-04/ you need to run the build as root:

        sudo su
        make && make install

    "You need to use ‘sudo su’ and not just ‘sudo’ so it creates the directories properly." That was the problem with the solution described above.
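    The lspci output shows the in-kernel rt2860/rt2800pci driver still claiming the card even though the vendor rt3562sta module is loaded. One hedged way to force the choice of driver, assuming you want the vendor module to win, is to blacklist the in-kernel drivers and rebuild the initramfs; the module names here are taken from the lspci output above, so adjust them to whatever lsmod actually shows on your system:

        # keep the in-kernel Ralink drivers from grabbing the card
        echo "blacklist rt2800pci" | sudo tee -a /etc/modprobe.d/blacklist-ralink.conf
        echo "blacklist rt2800lib" | sudo tee -a /etc/modprobe.d/blacklist-ralink.conf

        # make sure the vendor driver loads at boot
        echo "rt3562sta" | sudo tee -a /etc/modules

        sudo update-initramfs -u
        sudo reboot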

    Read the article

  • What a web developer can learn [closed]

    - by knoxxs
    There are many things to learn in web development. You can easily find out what the most important things to learn are if you want to be a webmaster. Answers to questions about how to become a web developer or a webmaster contain only a limited set of items that someone needs to master. (Some eg - a, b ) But the problem is that these resources are not complete. When I started learning web development I followed the same steps. But after covering the basics I realized I had learnt next to nothing; there were many more things to learn. I realized this by following blogs and Q&A sites. When I first downloaded the HTML5 Boilerplate, some of the issues it covers I had not even heard about. I just want you to suggest the possible things and issues someone can learn, and why they are worth learning. I know the answer is "follow blogs, do your work and you will learn with time", but with these platforms I could get some benefit out of others' experience. This question is not about how to become a webmaster, but answers to it may cover that too.

    Read the article

  • VirtualBox : increase hard disk size of the virtual machine

    - by wim
    I have run out of space on my WinXP virtual machine, which I only gave 10 GB when I created it. Is there an easy way to increase it to, say, 20 GB? I can't see any obvious option in the VirtualBox settings. Edit: the suggestion below gives this error:
        wim@wim-ubuntu:/media/data/winxp_vm$ VBoxManage modifyhd wim.vdi --resize 20000
        VBoxManage: error: Cannot register the hard disk '/media/data/winxp_vm/wim.vdi' {46284957-2c09-4e70-8a49-bfbe0f7f681d} because a hard disk '/home/wim/VirtualBox VMs/winxp_vm/wim.vdi' with UUID {46284957-2c09-4e70-8a49-bfbe0f7f681d} already exists
        VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee nsISupports
        Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 210 of file VBoxManageDisk.cpp
    Edit 2: removing the .vdi from VirtualBox before running the VBoxManage command, then adding it back in, was successful. But now I can't boot the virtual machine; I get a worrying screen that says FATAL: Could not read from the boot medium! System halted. Edit 3: the .vdi must be reattached to the VM after the VBoxManage command. Furthermore, the partition needs to be resized from WITHIN Windows, because the new space shows up as unallocated. I was able to resize the partition easily using a bit of freeware called EASEUS Partition Master 9.1.0 Home Edition.
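    A sketch of the sequence that ended up working, assuming the VM is registered as winxp_vm and the disk hangs off a storage controller named "IDE Controller" (check VBoxManage showvminfo winxp_vm for the real names in your setup):
        # detach and deregister the old disk so modifyhd does not trip over the duplicate-UUID error
        VBoxManage storageattach winxp_vm --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium none
        VBoxManage closemedium disk /media/data/winxp_vm/wim.vdi
        # grow the virtual disk to roughly 20 GB (the size is given in MB)
        VBoxManage modifyhd /media/data/winxp_vm/wim.vdi --resize 20000
        # attach the resized disk back to the same controller slot so the VM can boot from it again
        VBoxManage storageattach winxp_vm --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium /media/data/winxp_vm/wim.vdi
        # the extra space still appears as unallocated inside Windows until the partition is extended there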

    Read the article

  • Understanding branching strategy/workflow correctly

    - by burnersk
    I have been using svn without branches (trunk-only) for a very long time at my workplace, and I have run into most, if not all, of the issues that come with projects that have no branching strategy. Unfortunately this is not going to change at my workplace, but it can for my private projects. For my private projects, most of which involve coworkers working together at the same time on different features, I would like a robust branching strategy that supports long-term releases, powered by git. I found that the Atlassian toolchain (JIRA, Stash and Bamboo) helped me most, and it also recommends a branching strategy which I would like to verify against the team's needs. The branching strategy was taken directly from the Atlassian Stash recommendation, with a small modification to the hotfix branch tree: all hotfixes should also be merged into mainline. The branching strategy in words:
    - mainline (also known as master in git or trunk in svn) contains the "state of the art" development release. Everything here has been successfully checked with various automated tests (through Bamboo) and looks like it is working. It is not proven to work, because tests may be missing. It is ready to use but not recommended for production.
    - feature covers all new features which are not completely finished. Once a feature is finished it is merged into mainline. Sample branch: feature/ISSUE-2-A-nice-Feature
    - bugfix fixes non-critical bugs which can wait for the next normal release. Sample branch: bugfix/ISSUE-1-Some-typos
    - production contains the latest release.
    - hotfix fixes critical bugs which have to be released urgently to mainline, production and all affected long-term release branches. Sample branch: hotfix/ISSUE-3-Check-your-math
    - release is for long-term maintenance. Sample branches: release/1.0, release/1.1, release/1.0-rc1
    I am not an expert, so please give me feedback. Which problems might appear? Which parts are missing or slow down productivity?
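    A minimal command-level sketch of how the flow described above might look with plain git, using the sample branch names from the question; the --no-ff merges and the chosen starting points are assumptions added for illustration, not part of the original description:
        # create a feature branch off mainline and merge it back once it is finished
        git checkout mainline
        git checkout -b feature/ISSUE-2-A-nice-Feature
        # ... commit the feature work here ...
        git checkout mainline
        git merge --no-ff feature/ISSUE-2-A-nice-Feature
        # hotfixes branch off production and get merged into every affected line
        git checkout production
        git checkout -b hotfix/ISSUE-3-Check-your-math
        # ... commit the fix here ...
        git checkout production   && git merge --no-ff hotfix/ISSUE-3-Check-your-math
        git checkout mainline     && git merge --no-ff hotfix/ISSUE-3-Check-your-math
        git checkout release/1.0  && git merge --no-ff hotfix/ISSUE-3-Check-your-math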

    Read the article

  • How-To Backup, Swap, and Update Your Wii Game Saves

    - by Jason Fitzpatrick
    Whether you want to backup your game saves because you’ve worked so hard on them or you want to import game saves precisely so you don’t have to work so hard, we’ve got you covered. Image adapted from icon set by GasClown. There are a multitude of reasons you might want to export and import game saves from your Wii including: saving the progress on your favorite games before sending in your Wii for service, copying the progress to a friend’s or your secondary Wii, and importing saved games from the web or your friend’s Wii so that you don’t have to bust your ass to unlock all the specialty items yourself. (Here’s looking at you Mario Kart and House of the Dead: Overkill.)

    Read the article

  • The How-To Geek Video Guide to Using Windows 7 Speech Recognition

    - by YatriTrivedi
    Ever get the desire to control your computer, Star Trek-style? With Windows 7’s Speech Recognition, it’s easier than you might think. Microsoft has been working on its voice command steadily over the years. XP introduced it, Vista smoothed it, and 7 has it polished. It’s strangely not advertised as a feature, even though other voice command and speech recognition programs are hundreds of dollars. It may not be as perfect as some of them, but there’s definitely something amazing about vocally telling your computer to do things and it actually working.

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:
    - [dbo].[bigProduct] with 25200 rows
    - [dbo].[bigTransactionHistory] with 7827579 rows
    The credentials to log in and use AdventureWorks on Azure are as they were before:
    - Server: mhknbn2kdz.database.windows.net
    - Database: AdventureWorks2012
    - User: sqlfamily
    - Password: sqlf@m1ly
    Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form - all you have to do is log in, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please, please donate. I mentioned that I had to adapt Adam's script, the main reasons being:
    - Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master].
    - SELECT…INTO is not supported in SQL Azure.
    - The 1GB limit of the SQL Azure web edition meant there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows.
    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755 @Jamiet
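    For anyone who wants to try the database, here is a small sketch of connecting with the credentials above and counting the rows in the new big table. Using sqlcmd is just one option among many clients, and is an assumption on my part rather than part of the original post:
        # query AdventureWorks on Azure with the shared credentials from the post
        sqlcmd -S mhknbn2kdz.database.windows.net -d AdventureWorks2012 \
            -U sqlfamily -P 'sqlf@m1ly' \
            -Q "SELECT COUNT(*) AS bigTransactionHistoryRows FROM dbo.bigTransactionHistory;"
        # note: some SQL Azure clients expect the login in the user@server form, e.g. sqlfamily@mhknbn2kdz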

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors: Product, Customer – you name the data domain and there's data quality associated with it. Address verification is a little different, in that there is a tremendous amount of variation and nuance attached to it. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: customer relationship management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries – all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality; it performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data thanks to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit: http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article
