Search Results

Search found 20217 results on 809 pages for 'custom tag'.

Page 630/809

  • [MINI HOW-TO] Redeem Pre-paid Zune Card Points for Zune Marketplace Media

    - by Mysticgeek
    If you don’t want to pay the monthly fee for a Zune Pass, one option is buying a pre-paid Zune card. Here we take a look at how to redeem the Zune card points so you can get music for your Zune or Zune HD. Of course, the first thing you will need to do is buy a Zune card. You can find them in different amounts at most retail locations that sell Zunes, like Walmart, Best Buy, etc. When you purchase the card, make sure the cashier activates it. Now open up your Zune desktop software and sign in if you aren’t already. Go into Settings \ Account and, under Microsoft Points, click on Redeem Code. Enter the code from the back of the card that you scratch off and hit Next. After entering your code successfully, it asks for your contact information, which seems odd considering you’re using a prepaid card. You may want to enter a fictitious address and phone number if you are concerned about privacy, then click Next. The only thing you might want to enter legitimately is your email address, so you get a confirmation email. You’re given a thank-you message, and back in your Account Settings you’ll see the points have been added. Now you can go shopping for music, videos, TV shows, and more at the Zune Marketplace. If you don’t want to give up your credit card info and pay the monthly fee for the Zune Pass, using a prepaid card to purchase music as you go is a good alternative.

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other: Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb, and Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev. The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it for enriching a semantic model with calculated columns and measures and how to use it for querying a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is in printing and should ship by mid-July, so it will very soon be on the shelves of all the people who already preordered it! Teo Lachev’s book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint and exploring data by using Power View; then, the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo’s book follows a step-by-step approach describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and start using the products that are part of the new Microsoft wave of BI products, Teo’s book is for you. After you read Teo’s book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into internals, best practices and design patterns in BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions about new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries. It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data (the original post includes a diagram of how the architecture might end up). I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus; register on EventBrite if you want to come along.
    Synopsis: Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; suggested issues and risks around caching, including staleness of data, resource usage, performance and testing; walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services – which I'll be releasing on CodePlex shortly; listed common options for cache providers and their offerings.
    Discussion:
    - Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL – and cost (e.g. caching results of paid-for services).
    - Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on the cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern.
    - Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers are only accessible to cache clients via IPsec.
    - Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
    - Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model make it a good enterprise choice. The .NET port of memcached is well thought of for performance, but lack of replication makes it less suitable for these shared scenarios. The replicated fork – repcached – is untried and less active than memcached. NCache is also well thought of, but the Express version is too limited for enterprise scenarios, and the commercial versions look costly compared to AppFabric.
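    The generic service cache mentioned in the synopsis was implemented as a WCF behaviour; as a simpler illustration of the underlying cache-aside idea (check the cache first, call the service only on a miss, store the result with a TTL), here is a rough C# sketch. The IEnterpriseCache interface, the in-memory provider and the helper names are assumptions made for illustration, not the CodePlex implementation referred to above.

      // Minimal cache-aside sketch (illustrative only - names and TTLs are assumptions).
      using System;
      using System.Collections.Concurrent;

      public interface IEnterpriseCache
      {
          bool TryGet<T>(string key, out T value);
          void Put<T>(string key, T value, TimeSpan timeToLive);
      }

      // Simple in-memory stand-in for a dedicated cache node (AppFabric, memcached, NCache...).
      public class InMemoryCache : IEnterpriseCache
      {
          private readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _items =
              new ConcurrentDictionary<string, Tuple<object, DateTime>>();

          public bool TryGet<T>(string key, out T value)
          {
              value = default(T);
              Tuple<object, DateTime> entry;
              if (!_items.TryGetValue(key, out entry) || entry.Item2 < DateTime.UtcNow)
                  return false;                       // miss, or entry has expired
              value = (T)entry.Item1;
              return true;
          }

          public void Put<T>(string key, T value, TimeSpan timeToLive)
          {
              _items[key] = Tuple.Create((object)value, DateTime.UtcNow.Add(timeToLive));
          }
      }

      public static class ServiceCache
      {
          // Wrap any expensive service call: return the cached copy if present,
          // otherwise invoke the service and cache the result for the given TTL.
          public static T GetOrAdd<T>(IEnterpriseCache cache, string key,
                                      Func<T> serviceCall, TimeSpan timeToLive)
          {
              T value;
              if (cache.TryGet(key, out value))
                  return value;
              value = serviceCall();
              cache.Put(key, value, timeToLive);
              return value;
          }
      }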

    Read the article

  • Introducing the New Face of Fusion Applications

    - by mvaughan
    By Misha Vaughan and Kathy Miedema, Oracle Applications User Experience. At OpenWorld 2012, the Oracle Applications User Experience (UX) team unveiled the new face of Fusion Applications. You may have seen it in sessions presented by Chris Leone, Anthony Lye, Jeremy Ashley or others, or you may have gotten a look on the demogrounds. (Screenshot: the new Oracle Fusion Applications entry experience.) Why are we delivering a new face for Fusion Applications? Because, says Ashley, the vice president of the Oracle Applications User Experience team, we want to provide a simple, modern, productive way for users to complete their top quick-entry tasks. The idea is to provide a clear, productive user experience that is backed by the full functionality of Fusion Applications. The first release of the new face of Fusion focuses on three types of users. It provides a fully functional gateway to Fusion Applications for: new and casual users who need quick access to self-service tasks; professional users who need fast access to quick-entry, high-volume tasks; and users who are looking for a way to quickly brand their portal for employees. The new face of Fusion allows users to move easily from navigation to action, Ashley said, and it has been designed for any device -- Mac, PC, iPad, Android, SmartBoard -- in the browser. (Screenshot: the Oracle Fusion Applications Employee Directory.) How did we build it? The new face of Fusion essentially is a custom shell, developed by the Apps UX team, and a set of page templates that embodies a simple design aesthetic. It’s repeatable, providing consistency across its pages, and requires little to zero training. More specifically, the new face of Fusion has been built on ADF. The Applications UX team created pages in JDeveloper using local task flows bound to existing view objects. Three new components were commissioned from ADF, and existing Fusion components were re-skinned to deliver a simple, modern user experience. It really is that simple – and to prove that point, we’ve been sharing our story around the new face of Fusion on several Oracle channels such as this one. Want to know more? Check the VoX blog for our favorite highlights from OpenWorld, which included demos of the new face of Fusion. And take a look at these posts from Ace Directors Debra Lilley and Floyd Teter. Special mention to Floyd for the first screenshot credit. Also a nod to Wilfred vander Deijl for capturing the demo to share as part 1 and part 2. We will also be hitting upcoming user group conferences with our demos, and you can always reach out to one of our Fusion User Experience Advocates for a look.

    Read the article

  • How to replace the SharePoint date calendar control with more user friendly jQuery calendar control

    - by ybbest
    When you use the SharePoint date and time type for a date of birth field, you will notice that the calendar control is extremely non-user-friendly: you can only navigate month by month, as shown below. To resolve the issue, you can customize the list form page using SharePoint Designer and replace the OOB calendar control with the popular jQuery UI control. The solution works for SharePoint 2010, 2013 and Office 365. Here are the steps for how to achieve this.
    1. Open SharePoint Designer and create a New List Form called customNew, set as the default form for the selected type.
    2. Open the Style Library in file explorer and copy the jQuery and jQuery UI files into the Style Library in the SharePoint site. You can download jQuery and jQuery UI from the web; the content of contactPersonCustomNewForm.js is as below. I use the dd/mm/yy format as my locale in Regional Settings is English (New Zealand); you need to change this if you live in another country with a different date format.
    $(document).ready(function() { $("img#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDateDatePickerImage").parent().hide(); $("img#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDateDatePickerImage").hide(); $("input#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDate").datepicker({ changeMonth:true, changeYear:true, showOn: "button", buttonImage: "/_layouts/images/calendar.gif", buttonImageOnly: true, defaultDate:"01/01/1970", yearRange: "c-20:c+20", dateFormat: "dd/mm/yy" }); });
    In order to get the image and textbox selectors above, you can open the IE developer toolbar (press F12) and find the control IDs as below.
    3. Open SharePoint Designer and edit the newly created New List Form customNew.aspx in advanced mode, then copy and paste the following links in the PlaceHolderAdditionalPageHead.
    <SharePoint:CssRegistration name="<%$SPUrl:~SiteCollection/Style Library/themes/ui-lightness/jquery-ui.css%>" runat="server"/> <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/jquery-1.10.2.js" Defer="false" runat="server"/> <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/jquery-ui-1.10.4.custom.min.js" Defer="false" runat="server"/> <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/contactPersonCustomNewForm.js" Defer="false" runat="server"/>
    4. Now go to the list and click Add; you will see the new calendar control as shown below.

    Read the article

  • Best Platform/Engine for turn based Client/Server Android game

    - by Paradine
    I'm currently designing a turn based game for tablets, initially for Android, with porting to iOS considered later in the design. I'm having trouble narrowing down the available technologies to even know where to spend my research time. I am hoping that if I explain what I am trying to achieve, someone may be able to suggest a platform and/or engine. I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require - although with a higher focus on graphics than I need. Mages looks interesting, although development appears to have ceased. If I could somehow leverage GoogleApps that would be excellent. Here is what I am trying to achieve:
    - PvP turn based strategy game over the internet - minimal animation and bandwidth required
    - Players match up online using a MetaGame system
    - MatchID created on the Resolution Server and the game starts
    - Clients have a 30 second countdown to select a MoveString
    - Clients send a small, secure, timestamped and MatchID-tagged MoveString to the Resolution Server
    - Resolution Server looks up the MoveString for each player, resolves and updates the players' status in the MatchID on the server
    - Resolution Server updates client views
    - Repeat until victory conditions are met - MatchID closed, rewards earned in the MetaGame
    There will also need to be a full social and account system and metagame backend - but this could be running on separate system(s). The tablet in offline mode would be catalog browsing and perhaps single player AI - but I'm focusing on the Resolution Server at this point. I'm not even certain if I would be looking at an Android app or a WebApp at this stage! I want a custom GUI so I guess an app - but maybe, as I have little animation, a WebApp might also work; probably some combination of both. There will be very small overhead in data between client and server - essentially a small text string every 30 seconds sent to the Resolution Server, which looks up the effect and applies it to the opponent's string and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in-game integers tracked) - perhaps triggering small animations/popups on the client to show the end result, e.g. an explosion. If you have suggestions for a good technology or platform for best achieving the Resolution Server, I'd love to hear them. Also, if you have experience with open source engines - and could narrow down which (if any) might be most suitable - that would be a big help. Thanks in advance

    Read the article

  • At the Java DEMOgrounds - Oracle’s Java Embedded Suite 7.0

    - by Janice J. Heiss
    The Java Embedded Suite 7.0, a new, packaged offering that facilitates the creation of applications across a wide range of embedded systems including network appliances, healthcare devices, home gateways, and routers, was demonstrated by Oleg Kostukovsky of Oracle’s Java Embedded Global Business Unit. He presented a device-to-cloud application that relied upon a scan station connected to Java demos throughout JavaOne. This application allows an NFC tag distributed on a handout given to attendees to be scanned to gather various kinds of data. “A raffle allows attendees to check in at six unique demos and qualify for a prize,” explained Kostukovsky. “At the same time, we are collecting data both from NFC tags and sensors. We have a sensor attached to the back of the skin page that collects temperature, humidity, light intensity, and motion data at each pod. So, all of this data is collected using an application running on a small device behind the scan station.” “Analytics are performed on the network using Java Embedded Suite and technology from Oracle partners, SeeControl, Hitachi, and Globalscale,” Kostukovsky said. Next, he showed me a data visualization web site showing sensory, environmental, and scan data that is collected on the device and pushed into the cloud. The Oracle product that enabled all of this, Java Embedded Suite 7.0, was announced in late September. “You can see all kinds of data coming from the stations in real time -- temperature, power consumption, light intensity and humidity,” explained Kostukovsky. “We can identify trends and look at sensory data and see all the trends of all the components. It uses a Java application written by a partner, SeeControl. So we are using a Java app server and web server and a database.” The market for Java Embedded Suite 7.0: “It’s mainly geared to machine-to-machine applications because the overall architecture applies across multiple industries – telematics, transportation, industrial automation, smart metering, etc. This architecture is one in which the network connects to sensory devices and then pre-analyzes the data from these devices, after which it pushes the data to the cloud for processing and visualization. So we are targeting all those industries with those combined solutions. There is a strong interest from telcos, from carriers, who are now moving more and more into the space of providing full services for their interim applications. They are looking to deploy solutions that will provide a full service to those who are building M-to-M applications.”

    Read the article

  • Event Processed

    - by Antony Reynolds
    Installing Oracle Event Processing 11g. Earlier this month I was involved in organizing the Monument Family History Day. It was certainly a complex event, with dozens of presenters, guides and hundreds of visitors. So with that experience of a complex event under my belt, I decided to refresh my acquaintance with Oracle Event Processing (CEP). CEP has a developer side based on Eclipse and a runtime environment.
    Developer Install. The developer install requires several steps (documentation):
    1. Download required software: Eclipse (Linux) – it is recommended to use version 3.6.2 (Helios).
    2. Install Eclipse: unzip the download into the desired directory and start Eclipse.
    3. Add the Oracle CEP repository in Eclipse: http://download.oracle.com/technology/software/cep-ide/11/
    4. Install Oracle CEP Tools for Eclipse 3.6. You may need to set the proxy if behind a firewall.
    5. Modify eclipse.ini (if using Windows, edit with WordPad rather than Notepad). Point to a 1.6 JVM by inserting the following lines before –vmargs: -vm \PATH_TO_1.6_JDK\jre\bin\javaw.exe. Increase PermGen memory by inserting the following line at the end of the file: -XX:MaxPermSize=256M
    6. Restart Eclipse and verify that everything is installed as expected.
    Server Install. The server install is very straightforward (documentation). It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:
    1. Download required software: JRockit – I used Oracle “JRockit 6 - R28.2.5”, which includes “JRockit Mission Control 4.1” and “JRockit Real Time 4.1”; Oracle Event Processor – I used “Complex Event Processing Release 11gR1 (11.1.1.6.0)”.
    2. Install JRockit: run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
    3. Install CEP: unzip the downloaded file and run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable. Choose a custom install and add the examples if needed. It is not recommended to add the examples to a production environment, but they can be helpful in development.
    Voila, The Deed Is Done. With CEP installed you are now ready to start a server; if you didn’t install the demos then you will need to create a domain before starting the server. Once the server is up and running (using startwlevs.sh) you can verify that the visualizer is available on http://hostname:port/wlevs; the default port for the demo domain is 9002. With the server running you can test the IDE by creating a new “Oracle CEP Application Project” and creating a new target environment pointing at your CEP installation. Much easier than organizing a Family History Day!

    Read the article

  • Subscribe to RSS Feeds in Chrome with a Single Click

    - by Asian Angel
    Do you have a Google Reader account and need a quick, simple way to subscribe to new RSS feeds while you browse? Then you will definitely want to have a look at the Chrome Reader extension for Chrome. Before: if you want to add a new feed to your Google Reader account in Chrome, you have to do it manually. A single feed now and then is not a problem, but if you want to build a serious set of RSS feeds quickly, that is not so good. Chrome Reader in action: once the extension is installed you are ready to go. Any time that you visit a webpage with an RSS feed available you will see the familiar orange feed icon appear in your “Address Bar”. To add the feed to your Google Reader account just click on the orange feed icon. Note: You will need to be logged into your Google Reader account in your browser. When you click on the orange feed icon a small drop-down window will appear where you can modify the feed name and/or add it to a “custom folder” if desired. Notice that the orange feed icon has changed to the familiar Google Reader icon, indicating that the feed has been added to the account. Now you are ready to continue browsing…no other actions are required. And now to subscribe to the Microsoft feed at Ars Technica: once again, a single click and all done. Refreshing our Google Reader page shows both of our new RSS feeds ready to enjoy. Conclusion: the Chrome Reader extension makes it as simple as can be to add new RSS feeds to your Google Reader account while browsing with Chrome. Links: Download the Chrome Reader extension (Google Chrome Extensions)

    Read the article

  • Listen to Local FM Radio in Windows 7 Media Center

    - by DigitalGeekery
    If you have a supported tuner card and a connected FM antenna, you can listen to your favorite local over-the-air FM stations in Windows 7 Media Center. Before the FM radio option will be available in Windows Media Center, you’ll need to have a TV or radio tuner card installed and configured. If you have a TV tuner card installed, you may already have a radio tuner as well; many TV tuner cards also have built-in FM tuners. Open Windows Media Center, scroll to “Music” and over to “Radio,” then click on “FM Radio.” The radio will turn on and you’ll see the current station number listed in the white box. Just below are standard “Seek” and “Tune” buttons, as well as “Preset” options. Tuning works just like a typical FM radio. Click on the (-) or (+) buttons to “Tune” or “Seek” up and down the dial. If you already know the frequency of the station, enter the numbers using the numeric keypad on the remote control or keyboard. To save the current station you’re listening to as a preset, click on the “Save as Preset” button, type in a custom name for your preset station and click “Save.” Once you set your presets, they will also be available on the main FM Radio screen. The transport controls at the bottom of the screen also allow you to control Volume, Pause, Play, Skip back, and Skip forward. Fast Forward and Rewind, however, are not supported. This is a nice option if you’d like to listen to your local FM favorites on your computer, especially if those stations aren’t available online. If you don’t have an FM tuner and want to listen to thousands of online radio streams, check out our article on RadioTime in WMC.

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx. I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad Koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.Net on the server. “Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server, thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects.” “To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects, thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true.” <configuration> <runtime> <gcServer enabled="true"/> </runtime> </configuration> This is not done by default. The MSDN information page says: “There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled.” “In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false.” So if you’re using ASP.Net 4.5 and have a multi-core server, you should try turning on server garbage collection and do some profiling to see if it improves the performance of your site.
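    If you enable this and want to confirm which mode your application actually ended up in, the GCSettings class gives you a quick runtime check. A minimal C# sketch (the GCSettings properties are standard .NET; only the wrapper class here is made up for illustration):

      // Minimal sketch: report which GC mode the runtime picked up
      // after enabling <gcServer> as described above.
      using System;
      using System.Runtime;

      public static class GcModeCheck
      {
          public static void Report()
          {
              Console.WriteLine("Server GC enabled: {0}", GCSettings.IsServerGC);
              Console.WriteLine("GC latency mode:   {0}", GCSettings.LatencyMode);
          }
      }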

    Read the article

  • Should I choose Doctrine 2 or Propel 1.5/1.6, and why?

    - by Billy ONeal
    I'd like to hear from those who have used Doctrine 2 (or later) and Propel 1.5 (or later). Most comparisons between these two object relational mappers are based on old versions -- Doctrine 1 versus Propel 1.3/1.4 -- and both ORMs went through significant redesigns in their recent revisions. For example, most of the criticism of Propel seems to center around the "ModelName Peer" classes, which are deprecated in 1.5 in any case. Here's what I've accumulated so far (and I've tried to make this list as balanced as possible...):
    Propel Pros:
    - Extremely IDE friendly, because actual code is generated, instead of relying on PHP magic methods. This means IDE features like code completion are actually helpful.
    - Fast (in terms of database usage -- no runtime introspection is done on the database)
    - Clean migration between schema versions (at least in the 1.6 beta)
    - Can generate PHP 5.3 models (i.e. namespaces)
    - Easy to chain a lot of things into a single database query with things like useXxx methods. (See the "code completion" video above)
    Propel Cons:
    - Requires an extra build step, namely building the model classes.
    - Generated code needs to be rebuilt whenever the Propel version is changed, a setting is changed, or the schema changes. This might be unintuitive to some, and custom methods applied to the model are lost. (I think?)
    - Some useful features (i.e. version behavior, schema migrations) are in beta status.
    Doctrine Pros:
    - More popular
    - Doctrine Query Language can express potentially more complicated relationships between data than is easily possible with Propel's ActiveRecord strategy.
    - Easier to add reusable behaviors when compared with Propel.
    - DocBlock based commenting for building the schema is embedded in the actual PHP instead of a separate XML file.
    - Uses PHP 5.3 namespaces everywhere
    Doctrine Cons:
    - Requires learning an entirely new programming language (Doctrine Query Language)
    - Implemented in terms of "magic methods" in several places, making IDE autocomplete worthless.
    - Requires database introspection and thus is slightly slower than Propel by default; caching can remove this, but the caching adds considerable complexity.
    - Fewer behaviors are included in the core codebase. Several features Propel provides out of the box (such as Nested Set) are available only through extensions.
    - Freakin' HUGE :)
    All of this I have gleaned, though, only through reading the documentation available for both tools -- I've not actually built anything yet. I'd like to hear from those who have used both tools, to share their experience on the pros/cons of each library, and what their recommendation is at this point :)

    Read the article

  • What are the options for hosting a small Plone site? [closed]

    - by Tina Russell
    Possible Duplicate: How to find web hosting that meets my requirements? I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Rendering another screen on top of main game screen in fullscreen mode

    - by wolf
    My game runs in fullscreen mode and uses active rendering. The graphics are drawn on the fullscreen window in each game loop: public void render() { Window w = screen.getFullScreenWindow(); Graphics2D g = screen.getGraphics(); renderer.render(g, level, w.getWidth(), w.getHeight()); g.dispose(); screen.update(); } This is the screen.update() method: public void update(){ Window w = device.getFullScreenWindow(); if(w != null){ BufferStrategy s = w.getBufferStrategy(); if(!s.contentsLost()){ s.show(); } } } I want to display another screen on my main game screen (menu, inventory etc). Let's say I have a JPanel inventory, which has a grid of inventory cells (manually drawn) and some Swing components like JPopupMenu. So I tried adding that to my window and repainting it in the game loop, which worked okay most of the time... but sometimes the panel wouldn't get displayed. Blindly moving things around in the inventory worked, but it just didn't display. When I alt-tabbed out and back again, it displayed properly. I also tried drawing the rest of the inventory on my fullscreen window and using a JPanel to display only the buttons and popup menus. The inventory displayed properly, but the Swing components keep flickering. I'm guessing this is because I don't know how to combine active and passive rendering. public void render() { Graphics2D g = screen.getGraphics(); invManager.render(g); g.dispose(); screen.update(); invPanel.repaint(); } Should I use something else instead of a JPanel? I don't really need active rendering for these screens, but I don't understand why they sometimes just don't display. Or maybe I should just make my own custom components instead of using Swing? I also read somewhere that using multiple panels/frames in a game is bad practice, so should I draw everything on one window/frame/panel? If I CAN use JPanels for this, should I add and remove them every time the inventory is toggled? Or just change their visibility?

    Read the article

  • Virtual Developer Day: Oracle Fusion Development - December 11th - 10:00-14:00 CET

    - by Richard Lefebvre
    Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:
    - Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
    - Mobile Application Development with ADF Mobile
    - Oracle ADF development with Eclipse
    - Oracle WebCenter Portal and ADF Development
    - Application Lifecycle Management with ADF
    - Building Process Centric Applications with ADF and BPM
    - Oracle Business Intelligence and ADF Integration
    - Live Q&A chats with Oracle technical staff
    Developer lead, manager or architect – this event has something for everyone. Don't miss this opportunity. December 11th, 2012: 9:00 – 13:00 GMT / 10:00 – 14:00 CET / 12:00 – 16:00 AST / 13:00 – 17:00 MSK / 14:30 – 18:30 IST. Register online now for this FREE event!
    Agenda:
    - 9:00 a.m. – 9:30 a.m.: Opening
    - 9:30 a.m. – 10:00 a.m.: Keynote – Oracle Fusion Development
    - 10:00 a.m. – 11:00 a.m.: Track 1 (Introduction to Fusion Development): Is Oracle ADF development faster and simpler than Forms, Apex or .Net? | Track 2 (What's New in Fusion Development): Mobile Application Development with ADF Mobile | Track 3 (Fusion Development in the Enterprise): Oracle WebCenter Portal and ADF Development | Track 4 (Hands On Lab – WebCenter Portal and ADF Lab w/ JDeveloper): lab materials can be found on the event wiki; Q&A about the lab is available throughout the event
    - 11:00 a.m. – 12:00 p.m.: Rich Web UI made simple – an ADF Faces Overview | Oracle Enterprise Pack for Eclipse – ADF Development | Building Process Centric Applications with ADF and BPM
    - 12:00 p.m. – 1:00 p.m.: Next Generation Controller for JSF | Application Lifecycle Management for ADF | Oracle Business Intelligence and ADF Integration
    View Session Abstracts. We look forward to welcoming you at this free event!

    Read the article

  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions, some of these screen shots may differ. This assumes you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally. There are three steps to successful branching: first you branch; then, when you are ready, you need to reintegrate any changes that other developers may have made to the trunk into your branch; then finally, when your branch and the trunk are in sync, you merge it back into the trunk.
    Branch:
    1. Right click the project root in Windows Explorer > TortoiseSVN > Branch/Tag.
    2. Enter the branch label in the ‘To URL’ box, for example /branches/1.1.
    3. Choose Head revision.
    4. Check Switch working copy and click OK.
    5. Make any changes to the branch, make any changes to the trunk, and commit any changes. For this example I copied the project to another location prior to branching and made changes to that using Notepad++, then committed it to SVN; as this directory is mapped to the trunk, that is what gets updated.
    Merge Trunk with Branch:
    1. Right click the project root in Windows Explorer > TortoiseSVN > Merge.
    2. Choose ‘Merge a range of revisions’.
    3. In ‘URL to merge from’ choose your trunk.
    4. Click Next, then the ‘test merge’ button. This will highlight any conflicts. Here we have one conflict we will need to resolve, because we made a change and checked in to trunk earlier.
    5. Click Merge. Now we have the opportunity to edit that conflict. This will open up TortoiseMerge, which will allow us to resolve the issue; in this case I want both changes.
    6. Perform an Update, then Commit. Reloading in Visual Studio shows we have all changes that have been made to both trunk and branch.
    Merge Branch with Trunk:
    1. Switch the working copy by right clicking the project root in Windows Explorer > TortoiseSVN > Switch, then switch to the trunk and click OK.
    2. Right click the project root in Windows Explorer > TortoiseSVN > Merge.
    3. Choose ‘Reintegrate a branch’.
    4. In ‘From URL’ choose your branch, then click Next.
    5. Click ‘Test merge’; this shouldn’t show any conflicts.
    6. Click Merge.
    7. Perform an Update, then Commit. Open the project in Visual Studio; we now have all changes. So there we have it: we are connected back to the trunk and have all the updates merged.

    Read the article

  • XNA extending an existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data: // Project AudioGameLibrary namespace AudioGameLibrary { public class GameTrack { public Song Song; public string Extra; } } We've added a Content Pipeline extension: // Project GameTrackProcessor namespace GameTrackProcessor { [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")] public class GameTrackContent { public SongContent SongContent; public string Extra; } [ContentProcessor(DisplayName = "GameTrack Processor")] public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent> { public GameTrackProcessor(){} public override GameTrackContent Process(AudioContent input, ContentProcessorContext context) { return new GameTrackContent() { SongContent = new SongProcessor().Process(input, context), Extra = "Some extra data" // Here we can do our processing on 'input' }; } } } Both the library and the pipeline extension are added to the game solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems, however: // Project AudioGame protected override void LoadContent() { AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack"); MediaPlayer.Play(gameTrack.Song); } The error message: Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack. AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references? EDIT: Selecting the correct content processor helped; it loads the audio file correctly. However, when I try to process some data, e.g.: public override GameTrackContent Process(AudioContent input, ContentProcessorContext context) { int count = input.Data.Count; // With this commented out it works fine return new GameTrackContent() { SongContent = new SongProcessor().Process(input, context) }; } it crashes with the following error: Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'C:\Users\Maarten\Documents\Visual Studio 2010\Projects\AudioGame\DebugPipeline\bin\Debug\DebugPipeline.exe'. Additional Information: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::OpenAudioFile' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature. Information from the logger right before the crash: Using "BuildContent" task from assembly "Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553". Task "BuildContent" Building gametrack.mp3 -> bin\x86\Debug\Content\gametrack.xnb Rebuilding because asset is new Importing gametrack.mp3 with Microsoft.Xna.Framework.Content.Pipeline.Mp3Importer I'm experiencing exactly this: http://forums.create.msdn.com/forums/t/75996.aspx

    Read the article

  • debootstrap or virt-install Ubuntu Server Maverick fails

    - by poelinca
    OK, so running any kind of variation of debootstrap I get the following error: I: Extracting zlib1g... W: Failure trying to run: chroot /lxc/iso/dodo mount -t proc proc /proc. debootstrap.log: mount: permission denied. If I manually chroot into the directory then I get prompted with: id: cannot find name for group ID 0 I have no name!@...# I tried addgroup but it's not installed, and apt-get/aptitude: command not found, so I can't do anything with it. I've tried ubuntu-vm-builder, but since it's calling debootstrap I get the same error. I played with it for a few days and then I stopped and gave virt-install a try; everything works until I get to the console to finish the install, which shows only: Escape character is ^] and nothing more, no matter what I type. So basically what I'm trying to do is build a usable chroot system so I can use it with lxc or libvirt. What are my options to get containers/virtualisation up and running? I've read somewhere that I can use OpenVZ templates with lxc or libvirt? But how? Let me know if you need additional info (p.s. doing all this on a dedicated server, so I can't access it by hand, only ssh; plus on my local PC running Ubuntu Desktop Maverick everything works). EDIT: Getting closer; I managed to understand how to use an OpenVZ template with lxc, and now the problem comes with the network bridge: lxc-start: invalid interface name: br0 # Use same bridge device used in your controlling host setup lxc-start: failed to process 'lxc.network.link = br0 # Use same bridge device used in your controlling host setup ' lxc-start: failed to read configuration file. I followed the exact steps to create a bridge, and the lxc conf looks like: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 # Use same bridge device used in your controlling host setup lxc.network.hwaddr = {a1:b2:c3:d4:e5:f6} # As appropriate (line only needed if you wish to dhcp later) lxc.network.ipv4 = {10.0.0.100} # (Use 0.0.0.0 if you wish to dhcp later) lxc.network.name = eth0 # could likely be whatever you want. Since it's not working I know something is wrong, so could somebody guide me? EDIT: Looks like the base install was using a custom kernel (bzImage-2.6.34.6-xxxx-grs-ipv6-65) for which I didn't find the headers. I did an update-grub after I installed a new kernel, edited menu.lst, and now it's using 2.6.35-23-server, and debootstrap is working just fine, same as ubuntu-vm-builder.

    Read the article

  • Indie Software Developers - How do I handle taxes?

    - by Connor
    I apologize if this is the wrong site to post on; perhaps someone could point me to the proper place if it is not. Hello, I am 17 years old and currently develop applications/games for Android and iPhone, as well as develop websites and code a variety of my own projects. I have been very fortunate and have made a large amount of money, and continue to make money online, to the point where I do not need a stable job, though I'd like to get one after college. I've never held a job anywhere, and have never had to pay taxes. I'm coming into a lot of issues and I am quite confused. I get money from MANY sources - 15 different advertisement networks(!), 4 different payment processors, 5 different affiliate networks and a variety of other sources. All of them pay to different places and at different times (checking account, PayPal, reloadable debit card, etc.). I essentially have a list in Notepad with names and login information for each source. I have also created a PHP script that uses cURL to grab all the revenue from each service, add it all up, then text me every few hours so I can keep track. It's a mess, but it's working OK, and I can create custom reports (for the IRS?). But enough of that; my questions are about taxes in the US, and how indie developers handle it all. I'm at slightly over $250k so far this year, with negligible earnings last year. I have it all stockpiled in a bank account and haven't touched it; I'm a bit scared to. What do I file as? A sole proprietor, a business, just a regular person? How can I handle all of the different revenue sources? (AdSense, CJ, LinkShare) So far none of them have sent me any paperwork on taxes, and I've read that I'm supposed to pay taxes quarterly? Do I need paperwork from EACH source to file? Or can I just say I got $x total and that'd be it? What percentage do you pay of total earnings? On average? Should I create an LLC? A corporation? Or stay as a developer? What would be the cheapest option? Could I go to jail? I haven't touched the money except a few dollars to help my parents pay the mortgage once. Any insight would be great. My parents have no idea what I should do; both have no form of higher education and neither has a high school diploma. They just live day by day with simple jobs. I appreciate any help or experience with this.

    Read the article

  • SharePoint 2010 Data Retrieval Techniques

    - by Jayant Sharma
    In SharePoint, we have two options to perform CRUD operations: using server-side code and using client-side code. Using server-side code, we have: 1. CAML 2. LINQ. Using client-side code, we have: 1. Client Object Model (1.1 Managed Client Object Model, 1.2 Silverlight Client Object Model, 1.3 ECMA Client Object Model) 2. SharePoint Web Services 3. ADO.NET Data Services (based on REST web services) 4. RPC calls (owssvr.dll). Which option is used, and when, depends upon requirements. Each option has certain advantages and disadvantages, so before starting development of any new SharePoint project, it is important to understand the limitations of the different methods. The Server Object Model is used when our application is hosted on the same server on which SharePoint is installed, while client-side code is used to access SharePoint from a client system. In SharePoint 2010, the Client Object Model (COM) was introduced specifically to perform SharePoint operations from a client system.
    Advantages of CAML:
    - It is fast.
    - It can be used from all kinds of technology, like Silverlight or jQuery.
    - You can use the U2U CAML Query Builder to generate CAML queries.
    Disadvantages of CAML:
    - Error prone, as we can detect errors only at runtime.
    Advantages of LINQ:
    - Object-oriented technique (object relational model).
    - The LINQ to SharePoint provider works with strongly typed list item objects, so IntelliSense is available.
    - No knowledge of CAML is needed.
    - Less error prone, as it uses C# syntax.
    - You can compare two fields of a SharePoint list.
    Disadvantages of LINQ:
    - List attachments are not supported by the SPMetal tool.
    - Created By, Created, Modified and Modified By fields are not created by the SPMetal tool.
    - Custom fields are not created by the SPMetal tool.
    - External lists are not supported.
    - Since LINQ generates a CAML query behind the scenes, it is slower than using CAML directly in code.
    Advantages of the Client Object Model:
    - Used to access SharePoint from a client system.
    - No web server is required at the client end.
    - Can use Silverlight and JavaScript to make a better and faster user experience.
    Disadvantages of the Client Object Model:
    - You cannot use RunWithElevatedPrivileges.
    - Cross-site-collection queries are not possible.
    - Fewer APIs are available.
    ADO.NET Data Services:
    - Only list-based operations are possible; other types of operations are not.
    SharePoint Web Services and RPC calls:
    - These were previously used in SharePoint 2007, but after the introduction of the Client Object Model, Microsoft recommends not using web services to fetch data from SharePoint. In SharePoint 2010 they are available only for backward compatibility.
    Ref: http://msdn.microsoft.com/en-us/library/ee539764 Jayant Sharma
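    To make the CAML-versus-LINQ trade-off above concrete, here is a small server-side C# sketch of the same query written both ways. It assumes a site at http://server, a list called Tasks with a Status field, and an SPMetal-generated TasksDataContext with a Tasks entity list – those names are illustrative assumptions, not from the article.

      using System;
      using System.Linq;
      using Microsoft.SharePoint;
      using Microsoft.SharePoint.Linq;

      class DataRetrievalDemo
      {
          static void Main()
          {
              using (SPSite site = new SPSite("http://server"))
              using (SPWeb web = site.OpenWeb())
              {
                  // Option 1: CAML via SPQuery - fast, but the query text is only validated at runtime.
                  SPQuery query = new SPQuery
                  {
                      Query = "<Where><Eq><FieldRef Name='Status'/>" +
                              "<Value Type='Text'>Completed</Value></Eq></Where>"
                  };
                  SPListItemCollection items = web.Lists["Tasks"].GetItems(query);
                  foreach (SPListItem item in items)
                      Console.WriteLine(item.Title);

                  // Option 2: LINQ to SharePoint - strongly typed and checked at compile time,
                  // but translated to CAML behind the scenes (so slightly slower).
                  // TasksDataContext / Tasks are assumed to come from SPMetal.
                  using (var ctx = new TasksDataContext(web.Url))
                  {
                      var completed = from t in ctx.Tasks
                                      where t.Status == "Completed"
                                      select t.Title;
                      foreach (string title in completed)
                          Console.WriteLine(title);
                  }
              }
          }
      }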

    Read the article

  • Getting started with Document Set in SharePoint2010

    - by ybbest
    Folders are widely used in traditional file-based systems, and in the SharePoint world you can create folders in a document library as well. However, there is a new, improved feature in SharePoint called the Document Set, and you can attach metadata to a document set. To get started with Document Sets, you can perform the following steps.
    1. Go to Site Settings >> Site collection features >> activate the Document Sets feature.
    2. After the Document Sets feature is activated, you will get a new content type called Document Set.
    3. Next, we can create a custom content type called Loan Application Document Set that inherits from the Document Set content type.
    4. Then I create a new column called Application Number.
    5. Add this field to the Loan Application content type.
    6. Create a new content type called Loan Contract Form that inherits from the Document content type.
    7. Add the Application Number to the Loan Contract Form content type.
    8. Create a new content type called Loan Application Form that inherits from the Document content type and add Application Number to it (the same step as above).
    9. Go to the Loan Application Document Set content type and open the Document Set Settings.
    10. You can define which content types you would like this document set to contain, and you can also define the default document for each content type. When you create a new document set, those default documents will get automatically created in the document set. You can also define the shared fields that are shared across content types; in my case I define Application Number and Description as my shared fields. Finally, you can define the fields that you’d like to show on the document set welcome page.
    11. Now create a new document library, attach those content types to the document library and create a new Loan Application document set.
    12. You will see the default documents created in the document set. If you update the Application Number on the document set, the field will get updated in the documents inside the document set as well.
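    The walkthrough above is entirely UI-driven; if you later need to create these document sets from code, the server object model exposes a DocumentSet class for that. Below is a rough C# sketch assuming a library named Loan Applications and the Loan Application Document Set content type from the steps above – the URL, field name and values are illustrative only.

      using System.Collections;
      using Microsoft.Office.DocumentManagement.DocumentSets;
      using Microsoft.SharePoint;

      class DocumentSetDemo
      {
          static void Main()
          {
              using (SPSite site = new SPSite("http://server"))
              using (SPWeb web = site.OpenWeb())
              {
                  SPList library = web.Lists["Loan Applications"];
                  SPContentType ct = library.ContentTypes["Loan Application Document Set"];

                  // Metadata for the new document set - Application Number is the
                  // shared field defined in the walkthrough above (name assumed).
                  var properties = new Hashtable
                  {
                      { "Application Number", "LA-2012-001" }
                  };

                  // Creates the document set folder and provisions its default documents.
                  DocumentSet docSet = DocumentSet.Create(
                      library.RootFolder,
                      "LA-2012-001",
                      ct.Id,
                      properties,
                      true /* provision default content */);
              }
          }
      }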

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently. “Hi Pinal, In our system when we create new databases, by default, they are all created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them. 1) Creating a Policy; when it is violated, the database model can be fixed 2) Triggers at Server Level 3) An automated Job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the databases and change their recovery model. Also, we have a situation where we need a database in the Simple Recovery Model as well – how do we white-list them? Please suggest the best method.” Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc.)? It will be NONE of the above. Here is the solution in one line, and it is also the easiest way: just go to your model database. Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model – select Full from the dropdown. Every newly created database takes its base template from the model database. If you create a custom stored procedure in the model database, when you create a new database, it will automatically exist in that database. Any database that was already created before making changes in the model database will not be affected at all. Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the current specifications of the reader, I think the model database should be modified to have the Full Recovery option. While writing this blog post, I remembered another of my blog posts where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big. NOTE: Please do not touch the model database unnecessarily. It is a strict “No.” If you want to create an object that you need in all the databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
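    For readers who prefer to script the change rather than click through SSMS, the same setting can be applied with a one-line T-SQL statement (ALTER DATABASE model SET RECOVERY FULL) or, from C#, via SMO. The sketch below assumes the SMO assemblies and a local default instance; treat it as an illustration rather than part of the original tip.

      using Microsoft.SqlServer.Management.Smo;

      class SetModelRecovery
      {
          static void Main()
          {
              // Connect to the local default instance (adjust the server name as needed).
              Server server = new Server(".");

              // Switch the model database to Full recovery - every database created
              // afterwards inherits this setting as its template.
              Database model = server.Databases["model"];
              model.DatabaseOptions.RecoveryModel = RecoveryModel.Full;
              model.Alter();
          }
      }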

    Read the article

  • Winners among themselves - at the Oracle Partner Day in Frankfurt

    - by A&C Redaktion
    WE ARE WAITING FOR YOU! ORACLE PARTNER DAY - OCTOBER 29, 2012. Success always has a sporting component – ambition, knowledge and perseverance are decisive for making it to the next round. As a partner you can now play in a new league, because from now on you have the opportunity to sell the entire Oracle Red Stack product portfolio – software, hardware and applications. Awaiting you on that day: David Callaghan, Senior Vice President EMEA Alliances & Channels; Jürgen Kunz, Senior Vice President Northern Europe & Country Leader Germany; Silvia Kaske, Senior Director Alliances & Channels Europe North; Christian Werner, Senior Director Alliances & Channels Germany. Do you have questions for the executives? They will be answered live in the Oracle Leaders Panel (from 5 p.m.). Simply send us your questions: by SMS to +49 176 84879149, by email, via Facebook or on Twitter. The new A&C coaching team is in place – are you on board? Top performance needs a broad base: you will get to know the new team line-up live on site. Get support, and benefit from an excellent network – your partnership on the way to the top. Move up into the new league together with Alliances & Channels, whose product portfolio is unmatched in the market. Only a few days left until Oracle Partner Day 2012 – but it is not too late to register. Start your way to the top now and sign up right away. There is still the opportunity for a 1:1 meeting with selected Oracle managers on site; simply tick the contact you would like to talk to. You take the test, we pay the exam fee! Take the opportunity to get accredited as an OPN Implementation Specialist on the spot: register now for the official implementation exam at the Pearson Vue test center on site at the Oracle Partner Day, choose your specialization from Fusion Middleware, Applications, Hardware or Database, and go home as an Implementation Specialist. Now the ball gets rolling! A sporting finish in the Commerzbank Arena: after the technical topics of the day, we want to round off the evening with you in a sporting setting. The "Oracle Red Stack Arena Sport Challenge" has a few surprises in store for you. This much we can reveal: there will also be a prize draw linked to QR codes, with attractive prizes – have your smartphone ready. We look forward to seeing you!

    Read the article

  • How to Quickly Add Multiple IP Addresses to Windows Servers

    - by Sysadmin Geek
    If you have ever added multiple IP addresses to a single Windows server, you know that going through the graphical interface is an incredible pain: each IP must be added manually, each in a new dialog box. Here's a simple solution. Needless to say, the GUI route is incredibly monotonous and time consuming if you are adding more than a few IP addresses. Thankfully, there is a much easier way which allows you to add an entire subnet (or more) in seconds. Adding an IP Address from the Command Line Windows includes the "netsh" command which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using "netsh /?" you will be presented with a list of commands, each of which has its own list of sub-commands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters: netsh interface ipv4 add address Note: For Windows Server 2003/XP and earlier, "ipv4" should be replaced with just "ip" in the netsh command. If you view the help information, you can see the full list of accepted parameters, but for the most part what you will be interested in is something like this: netsh interface ipv4 add address "Local Area Connection" 192.168.1.2 255.255.255.0 The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled "Local Area Connection". Adding Multiple IP Addresses at Once When we combine a netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax for the FOR /L loop looks like this: FOR /L %variable IN (start,step,end) DO command So we could easily add every IP address from an entire subnet using this command: FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0 This command takes about 20 seconds to run, whereas adding the same number of IP addresses manually would take significantly longer. A Quick Demonstration Here is the initial configuration on our network adapter: ipconfig /all Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter: FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0 After the above command is run, viewing the IP configuration of the adapter shows the newly added addresses.
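    One caveat if you save the loop as a batch file instead of typing it at the prompt: the FOR variable must be written with a doubled percent sign (%%A instead of %A). A small sketch under that assumption, reusing the same placeholder adapter name and subnet as above, with a matching delete loop noted for undoing the change:

        @echo off
        rem add-ips.bat - adds 192.168.1.10 through 192.168.1.20 to "Local Area Connection"
        rem (inside a .bat file the FOR variable needs %%A instead of %A)
        FOR /L %%A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%%A 255.255.255.0

        rem To undo the change later, run the matching delete loop:
        rem FOR /L %%A IN (10,1,20) DO netsh interface ipv4 delete address "Local Area Connection" 192.168.1.%%A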

    Read the article

< Previous Page | 626 627 628 629 630 631 632 633 634 635 636 637  | Next Page >