Search Results

Search found 15535 results on 622 pages for 'mat keep'.


  • ArchBeat Link-o-Rama Top 10 for August 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook page for the month of August 2012:

      - Now Available: Oracle SQL Developer 3.2 (3.2.09.23). New features include APEX listener, UI enhancements, and 12c database support.
      - The Role of Oracle VM Server for SPARC in a Virtualization Strategy. In this article, Matthias Pfutzner discusses hardware, desktop, and operating system virtualization, along with various Oracle virtualization technologies, including Oracle VM Server for SPARC.
      - How to Manually Install Flash Player Plugin to see the Oracle Enterprise Manager Performance Page | Kai Yu. So, you're a DBA and you want to check the Performance page in Oracle Enterprise Manager (11g or 12c). So you click the Performance tab and… nothing. Zip. Nada. The Flash plugin is a no-show. Relax! Oracle ACE Director Kai Yu shows you what you need to do to see all the pretty colors instead of that dull grey screen.
      - Relationally Challenged (CX - CRM - EQ/RQ/CRQ) | Chris Warticki. Self-proclaimed Oracle Support "spokesmodel" Chris Warticki has some advice for those interested in Customer Relationship Management: "How about we just dumb it down, strip it to the core, keep it simple and LISTEN?! No more focus groups, no more surveys, and no need to gather more data. We have plenty of that. Why not just provide the customer what they are asking for?"
      - Free WebLogic Server Course | Middleware Magic. So you want to sharpen your Oracle WebLogic Server skills, but you prefer to skip the whole classroom bit and don't want to be bothered with dealing with an instructor? No problem! Oracle ACE Rene van Wijk, a prolific Middleware Magic blogger, has information on an Oracle WebLogic course you can take on your own time, at your own pace.
      - Oracle VM VirtualBox 4.1.20 released. Oracle VM VirtualBox 4.1.20 was just released at the community and Oracle download sites, reports the Fat Bloke. This is a maintenance release containing bug fixes and stability improvements.
      - Optimizing OLTP Oracle Database Performance using Dell Express Flash PCIe SSDs | Kai Yu. Oracle ACE Director Kai Yu shares resources based on "several extensive performance studies on a single node Oracle 11g R2 database as well as a two node 11gR2 Oracle Real Application Clusters (RAC) database running on Dell PowerEdge R720 servers with Dell Express Flash PCIe SSDs on an Oracle Enterprise Linux 6.2 platform."
      - Oracle ACE sessions at Oracle OpenWorld. With so many great sessions at this year's event, building your Oracle OpenWorld schedule can involve making a lot of tough choices. But you'll find that the sessions led by Oracle ACEs just might be the icing on the cake for your OpenWorld experience.
      - MySQL Update: The Cleveland MySQL Meetup (Independence, OH). Oracle MySQL team member Benjamin Wood, a MySQL engineer and five-year veteran of the MySQL organization, will speak at the Cleveland MySQL Meetup event on September 12. The presentation will include a MySQL 5.5 overview, Oracle's roadmap for MySQL (including specifics on MySQL 5.6), best practices and how to overcome development and operational MySQL challenges, and the new MySQL commercial extensions. Click the link for time and location information.
      - Parsing XML in Oracle Database | Martijn van der Kamp. Martijn van der Kamp's post deals with processing XML in PL/SQL code and processing the data into the database.

    Thought for the Day: "Walking on water and developing software from a specification are easy if both are frozen." — Edward V. Berard. Source: SoftwareQuotes.com

    Read the article

  • Using GMail's SMTP and IMAP servers in Notification Mailer

    - by Saroja Kandepuneni
    Overview

    GMail offers free, reliable, popular SMTP and IMAP services, which many people are interested in using. GMail can be used when there are no in-house SMTP/IMAP servers for testing or debugging purposes. This blog explains how to install the GMail SSL certificate in the concurrent tier, test the connection using a standalone program, run Mailer diagnostics, and configure the GMail IMAP and SMTP servers for Workflow Notification Mailer inbound and outbound connections.

    GMail server configuration

    SMTP server:
      - Host Name: smtp.gmail.com
      - SSL Port: 465
      - TLS/SSL Required: Yes
      - User Name: Your full email address (including @gmail.com or @your_domain.com)
      - Password: Your GMail password

    IMAP server:
      - Host Name: imap.gmail.com
      - SSL Port: 993
      - TLS/SSL Required: Yes
      - User Name: Your full email address (including @gmail.com or @your_domain.com)
      - Password: Your GMail password

    GMail SSL Certificate Installation

    The following is the procedure to install the GMail SSL certificate. First, copy the GMail SSL certificate below to a file, e.g. gmail.cer:

        -----BEGIN CERTIFICATE-----
        MIIDWzCCAsSgAwIBAgIKaNPuGwADAAAisjANBgkqhkiG9w0BAQUFADBGMQswCQYDVQQGEwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzEiMCAGA1UEAxMZR29vZ2xlIEludGVybmV0IEF1dGhvcml0eTAeFw0xMTAyMTYwNDQzMDRaFw0xMjAyMTYwNDUzMDRaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKEwpHb29nbGUgSW5jMRcwFQYDVQQDEw5pbWFwLmdtYWlsLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAqfPyPSEHpfzvXx+9zGUxoxcOXFrGKCbZ8bfUd8JonC7rfId32t0gyAoLCgM6eU4lN05VenNZUoChL/nrX+ApdMQv9UFV58aYSBMU/pMmK5GXansbXlpHao09Mc8eur2xV+4cnEtxUvzpco/OaG15HDXcr46c6hN6P4EEFRcb0ccCAwEAAaOCASwwggEoMB0GA1UdDgQWBBQj27IIOfeIMyk1hDRzfALz4WpRtzAfBgNVHSMEGDAWgBS/wDDr9UMRPme6npH7/Gra42sSJDBbBgNVHR8EVDBSMFCgTqBMhkpodHRwOi8vd3d3LmdzdGF0aWMuY29tL0dvb2dsZUludGVybmV0QXV0aG9yaXR5L0dvb2dsZUludGVybmV0QXV0aG9yaXR5LmNybDBmBggrBgEFBQcBAQRaMFgwVgYIKwYBBQUHMAKGSmh0dHA6Ly93d3cuZ3N0YXRpYy5jb20vR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkvR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkuY3J0MCEGCSsGAQQBgjcUAgQUHhIAVwBlAGIAUwBlAHIAdgBlAHIwDQYJKoZIhvcNAQEFBQADgYEAxHVhW4aII3BPrKQGUdhOLMmdUyyr3TVmhJM9tPKhcKQ/IcBYUev6gLsB7FH/n2bIJkkIilwZWIsj9jVJaQyJWP84Hjs3kus4fTpAOHKkLqrbIZDYjwVueLmbOqr1U1bNe4E/LTyEf37+Y5hcveWBQduIZnHn1sDE2gA7LnUxvAU=
        -----END CERTIFICATE-----

    Then install the SSL certificate into the default JRE location, or any other location, using the commands below.

    Installing into the default JRE location in an EBS instance:

        # keytool -import -trustcacerts -keystore $AF_JRE_TOP/lib/security/cacerts -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer

    Installing into a custom location:

        # keytool -import -trustcacerts -keystore <customLocation> -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer

    where <customLocation> is the directory in the instance where the certificate needs to be installed. After running the above command you will see the following response:

        Trust this certificate? [no]:  yes
        Certificate was added to keystore

    Running Mailer Command Line Diagnostics

    Run Mailer command line diagnostics from the concurrent tier where the Mailer is running to check the IMAP connection, using the command below:

        $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=imap -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=imap.gmail.com -Dport=993 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailImapTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer

    Run Mailer command line diagnostics from the concurrent tier where the Mailer is running to check the SMTP connection, using the command below:

        $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=smtp -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=smtp.gmail.com -Dport=465 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailSmtpTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer

    Standalone program to verify the IMAP connection

    Run the standalone program below from the concurrent tier node where the Mailer is running to verify the connection with the GMail IMAP server. It connects to the GMail IMAP server with the given GMail user name and password and lists all the folders that exist in that account. If the GMail IMAP server is not working for the Mailer, check whether the PROCESSED and DISCARD folders exist for the GMail account; if not, create them manually by logging in to the GMail account. (A hedged sketch of such a check appears at the end of this entry.) The standalone program can be run as below:

        $java GmailIMAPTest GmailUsername GMailUserPassword

    Standalone program to verify the SMTP connection

    Run the standalone program below from the concurrent tier node where the Mailer is running to verify the connection with the GMail SMTP server. It connects to the GMail SMTP server by authenticating with the given user name and password and sends a test email message to the given recipient email address. The standalone program can be run as below:

        $java GmailSMTPTest GmailUsername GMailPassword recipientEmailAddress

    Warnings
      - As gmail.com is an external domain, the Mailer concurrent tier must allow connections to the GMail server.
      - Please keep in mind when using this for corporate facilities that the e-mail data will be stored outside the corporate network.
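    The sample programs referred to above were attached to the original post and are not reproduced here. For illustration only, a minimal sketch of what the IMAP check does, assuming the standard JavaMail (javax.mail) API (the class name and structure here are hypothetical, not the post's actual sample), could look like this:

        // Hypothetical sketch of an IMAP connectivity check using JavaMail.
        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Session;
        import javax.mail.Store;

        public class GmailIMAPTest {
            public static void main(String[] args) throws Exception {
                String user = args[0];     // full address, e.g. [email protected]
                String password = args[1];

                // "imaps" selects IMAP over SSL, matching port 993 above.
                Session session = Session.getInstance(new Properties());
                Store store = session.getStore("imaps");
                store.connect("imap.gmail.com", 993, user, password);

                // List every folder in the account; per the post, the PROCESSED
                // and DISCARD folders should appear here for the Mailer to work.
                for (Folder folder : store.getDefaultFolder().list("*")) {
                    System.out.println(folder.getFullName());
                }
                store.close();
            }
        }

    An analogous SMTP sketch would authenticate a JavaMail Transport against smtp.gmail.com:465 and send one test message to the given recipient.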

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g:

    1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", then he or she is also required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user arrives at the screen where he or she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template.

    2) Create the inbound service and outbound references and wire them with the BPEL component.

    3) Finally, create the BPEL process with an initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.

    This is how most BPEL processes that use adapters are modeled. And if we scrutinize the generated WSDL, we can clearly see that it is one-way, and that makes the BPEL process asynchronous. In other words, the inbound FileAdapter polls for files in the directory and, for every file that it finds there, translates the content into XML and publishes it to BPEL. But since the BPEL process is asynchronous, the adapter returns immediately after the publish and performs the required post-processing, e.g. deletion or archival.

    The disadvantage with such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter keeps sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues, including higher memory usage, CPU usage, and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter automatically throttles itself, since the adapter is forced to wait for the downstream process to complete with a <reply> before processing the next file or message.

    The tweak is to convert the one-way WSDL into a two-way WSDL, thereby making it synchronous (a sketch follows below), and then add a <reply> activity to the inbound adapter partner link at the end of your BPEL process. You are done.

    Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.
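    The original post showed the tweaked WSDL in a screenshot that is not reproduced here. As a hedged reconstruction of the essence of the change (the portType, operation, and message names below are illustrative, not taken from the post), converting one-way to two-way means adding an <output> so the operation becomes request-response:

        <!-- One-way (as generated): input only, so the caller does not wait. -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
          </operation>
        </portType>

        <!-- Two-way (tweaked): the added output makes the contract request-response,
             forcing the file adapter to wait for the BPEL <reply>. -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
            <output message="tns:ReadResponse_msg"/>
          </operation>
        </portType>

    The corresponding response message (here tns:ReadResponse_msg) also needs to be declared with a <message> element in the same WSDL.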

    Read the article

  • Share Folders & Files Between Vista and XP Machines

    - by Mysticgeek
    Since Microsoft has three operating systems in use, chances are you’ll find yourself needing to share files between XP, Vista, Windows 7, or some combination of the three. Here we take a look at sharing between a Vista and an XP machine on your home network.

    Share Without Password Protected Sharing

    If you’re not worried about who’s accessing the files and folders, the easiest method is to disable Password Protected Sharing. On the Vista machine, open Network and Sharing Center. Under Sharing and Discovery, make sure Network Discovery, File Sharing, and Public Folder Sharing are turned on. Also turn off Password Protected Sharing. Now go into the Vista Public folder, located in C:\Users\Public, and add what you want to share or create a new folder. In this example we created a new folder called XP_Share and added some files to it. On the XP machine, go into My Network Places and under Network Tasks click on View Workgroup Computers. Now you’ll see all of the computers on your network, which should be part of the same Workgroup. Here we need to double-click on the Vista computer. And there we go… no password to enter, so we can access the XP_Share folder or anything else located in the Public folder.

    Share with Password Protected Sharing

    If you want to keep Password Protected Sharing turned on, then we need to do things a little differently. When it’s turned on and you try to access the Vista machine from XP, you’re prompted for a password, and no matter what you think the credentials are, you can’t get access… very annoying. So what we need to do is add the XP machine as a user. Right-click on Computer from the Start Menu or desktop icon and select Manage from the context menu. The Computer Management screen opens up; expand Local Users and Groups, then the Users folder. Then right-click any open area and select New User. Now create a new user name and password; you can also fill in the other fields if you want. Then make sure to uncheck User must change password at next logon and check the box next to Password never expires. Click the Create button and close out of the New User screen. You’ll then see the new user we created in the list, and you can close out of the Computer Management window. Now, back on the XP computer, when you double-click on the Vista machine, you’re prompted to log in. Just type in the username and password you just created. Now you’ll have access to the Public folder contents.

    Set up Sharing on XP

    If you want to access a shared folder from the Vista computer located on the XP machine, it’s the same process in reverse. On the XP computer in Shared Documents, right-click on the folder you want to share and select Sharing and Security. Then select the radio button next to Share this folder and click OK. Go into Computer Management and create a new user… Now from the Vista machine, double-click on the XP machine icon, enter the password, then access the folders and files you need.

    If you have multiple versions of Windows on your home network, you’ll now be able to access files and folders from each of them. If you want to share between Windows 7 and XP, check out our article on how to share files and printers between Windows 7 and XP. You might also want to check out our article on how to share files and printers between Windows 7 and Vista.

    Read the article

  • std::map for storing static const Objects

    - by Sean M.
    I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way that Minecraft does it: they declare a bunch of static final Block objects and initialize them, and then the constructor of each block puts a reference to that block into whatever the Java equivalent of a std::map is, so there is a central place to get ids and the Blocks with those ids. The problem is that I am making my game in C++ and trying to do the exact same thing. In Block.h, I am declaring the Blocks like so:

        // Block.h
        public:
            static const Block Vacuum;
            static const Block Test;

    And in Block.cpp I am initializing them like so:

        // Block.cpp
        const Block Block::Vacuum = Block("Vacuum", 0, 0);
        const Block Block::Test = Block("Test", 1, 0);

    The block constructor looks like this:

        Block::Block(std::string name, uint16 id, uint8 tex)
        {
            // Check for repeat ids
            if (IdInUse(id))
            {
                fprintf(stderr, "Block id %u is already in use!", (uint32)id);
                throw std::runtime_error("You cannot reuse block ids!");
            }
            _id = id;

            // Check for repeat names
            if (NameInUse(name))
            {
                // note: c_str() is needed to pass a std::string to %s
                fprintf(stderr, "Block name %s is already in use!", name.c_str());
                throw std::runtime_error("You cannot reuse block names!");
            }
            _name = name;

            _tex = tex;
            //fprintf(stdout, "Using texture %u\n", _tex);
            _transparent = false;
            _solidity = 1.0f;

            idMap[id] = this;
            nameMap[name] = this;
        }

    And finally, the maps that I'm using to store references to Blocks in relation to their names and ids are declared as such:

        std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();              // The map of block ids
        std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>();  // The map of block names

    The problem comes when I try to get the Blocks in the maps using a method called const Block* GetBlock(uint16 id), where the last line is return idMap.at(id);. This line returns a Block with completely random values like _visibility = 0xcccc and such, found out through debugging. So my question is, is there something wrong with the blocks being declared as const objects and then stored as pointers and accessed later on? The reason I can't store them as Block& is that it makes a copy of the Block when it is entered, so the block wouldn't have any of the attributes that could be set afterwards in the constructor of any child class, so I think I need to store them as a pointer. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.
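    A note worth adding here: the symptoms described are typical of the static initialization order problem. Block::Vacuum and Block::idMap are both namespace-scope statics, and C++ does not guarantee the maps are constructed before the Block constants (only definition order within one translation unit is guaranteed), so the constructors may write into maps that do not exist yet. A hedged sketch of the construct-on-first-use idiom that sidesteps this (types simplified, not the poster's full class):

        #include <cstdint>
        #include <map>
        #include <string>
        #include <utility>

        class Block {
        public:
            Block(std::string name, std::uint16_t id)
                : _name(std::move(name)), _id(id) {
                // The registry is a function-local static, so it is guaranteed
                // to be constructed before this line runs, whatever the order
                // of translation-unit initialization.
                IdMap()[_id] = this;
            }

            // Construct-on-first-use: the map lives inside a function.
            static std::map<std::uint16_t, Block*>& IdMap() {
                static std::map<std::uint16_t, Block*> idMap;
                return idMap;
            }

            static const Block* GetBlock(std::uint16_t id) {
                auto it = IdMap().find(id);
                return it == IdMap().end() ? nullptr : it->second;
            }

        private:
            std::string _name;
            std::uint16_t _id;
        };

        // Safe regardless of initialization order across files.
        const Block Vacuum("Vacuum", 0);
        const Block Test("Test", 1);

    Using find() instead of at() in GetBlock also avoids an exception when an unknown id is looked up.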

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail, big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data.

    And now for some worst practices that I'd recommend you please not follow:

    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s business analytics, data warehouses, and business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications, still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?

    Worst Practice Lesson 2: Throw away all of your existing business applications… because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications as well as support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights.

    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.

    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on Twitter at #BridgingBigData or downloading our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise as she found out that I had spent the better part of a decade managing content within the Health Insurance industry, and I discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries?

    The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs.

    The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction intensive. These are the companies that manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes.

    We are all aware of the significant impact that the healthcare reform bill has had on the Health Insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, the administrative costs of health insurance include the insurer's cost to administer the health plan and the costs borne by employers, health-care providers, governments, and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancements in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, and thereby increase the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.

    The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process, and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations, and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices, and portals. Content can be shared securely within and outside the organization using Information Rights Management.

    At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use, and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • Release 17 is here!

    - by Cheryl
    Our training development team has been busy updating courses to keep pace with the new release of CRM On Demand. Release 17 is here! And I heard recently that it's one of our biggest releases ever. A lot of new features and functionality for you to take advantage of - too much for me to cover in this blog post. But I thought I'd tell you about a few of my favorites - be sure to take a look at the What's New in Release 17 recording to see the full list, though... because I'm only going to touch on a few.

    Create your own look - okay, I'm starting with the fun stuff. There is a new customizable themes feature so that you can change the look of the application: colors, logo, the shape of the tabs. And it's really easy. There's also a whole new library of ready-made themes for you to pick from if you just want to go with one of those. Use this new feature to match the look of your company logo and color scheme. Or blaze new trails. You can create the look for the whole company, or a different look for each CRM On Demand role. This might especially come in handy if you're using the Partner Relationship Management (PRM) capabilities of CRM On Demand - you can create themes for your partner-facing roles to provide branded partner portals.

    Speaking of PRM - there are enhancements in this release to help companies better manage their partner relationships. A new Deal Registration object, which is separate from the Opportunity record, and better Special Pricing Request and Marketing Development Fund Request processes give a lot more flexibility in how companies can build and manage their relationships with partners.

    There are some new options for Forecasts in Release 17, too. You can now have more than one type of forecast generated each forecast period. For example, you might need to see a forecast of the total opportunity revenue for your sales team, as well as one that breaks down revenue by product. The forecast definition now lets you do that. Other options allow you to make submitting forecasts easier, split opportunity revenue across the team, and forecast that split appropriately. And - look for the new Forecast subject area in Answers, for building custom forecast reports.

    Ever wish you could use Workflow Rules to automatically reassign leads if they haven't been followed up on... or to email a manager if the status of a service request isn't changed after a specified period of time? Then check out the new Wait action for workflows. I think you'll be happy.

    Ok, enough for today. There is a lot to Release 17 that I didn't mention - a lot has been added for our Life Science industry edition, some new data visibility options, a new Data Loader tool, and more. Stay tuned for more blog posts about these and other Release 17 features in the coming weeks. In the meantime, don't forget about all of the resources we have for you to learn more (see my Learning About Release 17 blog post for details).

    Read the article

  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from Google e-mail alerts that I’ve set up to keep track of key industry trends and competitive news.  In the past few weeks, I’ve been getting a number of news alerts about SAP.  Below are a few recent examples:

    Warm weather cuts short US maple sugaring season – by Toby Talbot, AP

    MILWAUKEE – Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half-empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP."

    Home & Garden: Too-Early Spring Means Sugaring Woes – by Georgeanne Davis for The Free Press

    Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen.

    SAP season comes to a sugary Sunday finale – Kennebec Journal, March 26th, 2012

    Rebecca Manthey stood out in the rain at the entrance of Old Fort Western keeping watch over a cast iron kettle of boiling SAP hooked to a tripod over a wood fire.  Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association.  She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of Maple syrup.  "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it to fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations.

    I hope you enjoy these updates on SAP – Happy April Fool’s Day!

    Read the article

  • Where's My Windows Azure Subscriptions

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/03/wheres-my-windows-azure-subscriptions.aspx

    Yesterday when I opened the Windows Azure management portal, I found that some resources were missing. I checked the websites for those missing cloud services and they were still live. Then I checked my billing history but didn't find any problem. When I went back to the portal, I found that all of those resources were under my MSDN subscription. So I wondered whether this was related to the recent Windows Azure platform update.

    This feature, named "Enterprise Management", provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution. By default, every existing Windows Azure account has a default Windows Azure Active Directory (a.k.a. WAAD) associated with it. In the address bar I can find the default login WAAD of my account, which is "microsoft.onmicrosoft.com". To change the WAAD we can click "subscriptions" at the top of the management portal, select the active directory from the "filter by directory" list, select the subscription we want to see, and then press "apply". As you can see, the subscription under my MSDN was located in a WAAD named "beijingtelecom.onmicrosoft.com". This is because when Microsoft applied this feature, they checked whether you had an existing WAAD in your subscription. If not, they created a new one; otherwise they used your WAAD and moved your subscription into that directory. Since I created a WAAD for testing several months ago, this subscription was moved to that directory.

    Changing a subscription's directory is simple. First we need to create a new WAAD with the name we prefer. Below, I created a new directory named "shaunxu". Then select "settings" from the left navigation bar, select the subscription we want to change, and click "edit directory". You don't have the permission to edit/change the directory unless your Microsoft Account is the service administrator of the subscription. Then, in the popup window, select the WAAD you want to change to and press "next". All done. You need to log off and log back in to the portal, and your subscription will be in the directory you wanted. After these steps I can view my resources in this subscription.

    Summary

    In this post I described how to change subscriptions into a new directory. With this new feature we can manage our Windows Azure subscriptions more flexibly. But there are some things we need to keep in mind:
    1. Only the service administrator is able to move a subscription.
    2. Currently there's no way for us to see our Windows Azure services in more than one directory at the same time. In my case, I can see my services under "shaunxu.onmicrosoft.com" and I must change the filter directory from the "subscriptions" menu to see the other services under "microsoft.onmicrosoft.com".
    3. Currently we cannot delete an existing WAAD.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle OpenWorld? How about the Oracle Linux sessions and hands-on labs below?

    1. General Session: Oracle Linux Strategy and Roadmap. In this session, Oracle executives will discuss the Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don’t miss this session.

    2. New Features in Oracle Linux - A Technical Deep Dive. Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel.

    3. Why Switch to Oracle Linux? Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs, as well as the reasons to switch to Oracle Linux.

    4. Debugging and Configuration Best Practices for Oracle Linux. This is one of our best-attended and most informative sessions. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues.

    5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments. In this session, attendees will learn how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager Ops Center, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities to help with hardening and securely operating an Oracle Linux system.

    We also have a great session in the Oracle Develop track:

    6. DTrace for Oracle Linux. Initially announced at last year's Oracle OpenWorld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel Release 2. In this session, held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment.

    If you prefer hands-on experience, don’t miss our two hands-on labs:

    HOL-1: Oracle Linux Package Management: Configuring and Enabling Services. In this lab you will install and configure Oracle VM VirtualBox and import the Oracle Linux virtual appliance. You will then use package management on Oracle Linux with RPM and yum. You will also review Ksplice, zero-downtime kernel updates that enable you to apply security updates, patches, and critical bug fixes without rebooting.

    HOL-2: Oracle Linux Storage Management with LVM and Device Mapper. In this lab you will learn about storage management with LVM2 (the Linux Logical Volume Manager) and Btrfs: preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices and configuring encrypted block devices.

    You will also see Oracle Linux and Ksplice in the three demopods we will feature on the exhibition demogrounds: one at MySQL Connect and two at Oracle OpenWorld. What more do you need to come to San Francisco? Oh, I forgot to mention we also have great weather in the fall... Check out the Content Catalog and register to attend the Oracle Linux sessions.

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code over and over again, but I probably keep missing the obvious reason why this is happening. Here is an image of my game: it's simply a 6-sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;

            glTranslatef(0, 0, 0);

            // rotate it around the y axis
            glRotatef(angle, 0.f, 0.f, 1.f);
            glColor4f(1, 1, 1, 1);

            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x - scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y, tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z, tmp);
            tmp = (1.f / tmp);
            glScalef(tmp / 5, tmp / 5, tmp / 5);

            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if (scene_list == 0)
            {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }

            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;

            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n)
            {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];

                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->mNormals == NULL)
                {
                    glDisable(GL_LIGHTING);
                }
                else
                {
                    glEnable(GL_LIGHTING);
                }

                for (t = 0; t < mesh->mNumFaces; ++t)
                {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;

                    switch (face->mNumIndices)
                    {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++)
                    {
                        int index = face->mIndices[i];
                        if (mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if (mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n)
            {
                recursive_render(sc, nd->mChildren[n]);
            }

            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.
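    One observation worth noting: the snippets above never set up the projection matrix, which matches the symptom described. As a hedged illustration only (this helper is hypothetical, not code from the question), a typical fixed-function projection setup for a window of a given size looks like the following; checking that something equivalent runs at startup and on every resize is a cheap first debugging step:

        // Hypothetical helper: set a perspective projection matched to the
        // window's aspect ratio, then return the stack to GL_MODELVIEW so the
        // rendering code above keeps working on the right matrix.
        #include <GL/glu.h>

        void setupProjection(int width, int height)
        {
            glViewport(0, 0, width, height);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // 45-degree vertical field of view; the near/far planes
            // (0.1 / 100.0) are illustrative values.
            gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }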

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage.  It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight.

    eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious… our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us.

    And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate as those that find you on desktop or laptop.

    But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on.

    There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion.

    To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year.

    How can we as marketers mess up this opportunity? Two ways. We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted.

    @mikestiles @oraclesocial
    Photo: Kate Mallatratt, freeimages.com

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 28 - November 3, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of Oct 28 - Nov 3, 2012:

      - Eventually, 90% of tech budgets will be outside IT departments | ZDNet. Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.
      - ADF Mobile - Login Functionality | Andrejus Baranovskis. "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
      - Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social. "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up.
      - ADF Essentials - The Bare Necessities | Floyd Teter. The experiment is over… And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF.
      - A review of Oracle SOA Suite 11g Administrator’s Handbook | RedStack. "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator’s Handbook, by Ahmed Aboulnaga and Arun Pareek.
      - Expanding the Oracle Enterprise Repository with functional documentation. Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts.
      - Podcast: Are You Future Proof? - Part 2. In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen discuss re-tooling one’s skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what’s truly valuable.
      - Easy way to access JPA with REST (JSON / XML) | Edwin Biemond. Oracle ACE Edwin Biemond shows you "what is possible with JPA-RS, how easy it is and howto setup your own EclipseLink REST service."
      - Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley. "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.
      - 2012 IOUG Virtualization SIG - Online Symposium on Nov 7 and Nov 8 | Kai Yu. Oracle ACE Director Kai Yu shares information on this week's IOUG Virtualization SIG online event. Does that make it a virtual virtualization event?

    Thought for the Day: "If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning — and the response would be, 'We’re sorry, here’s a coupon for two more.'" — Mark Minasi. Source: SoftwareQuotes.com

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()

                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success

            End Sub

        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()

                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections

                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Next

                Dts.TaskResult = Dts.Results.Success

            End Sub

        End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to readily control the logging by choosing which events to log in the normal way.

    Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only; and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference:

    Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

    It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being returned? You need to turn it on; by default no custom log events are captured. I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even my life overall. Today is Saturday, so I decided to write about SQL Server. Just like any other morning, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me. But today I opened my Twitter account and came across a very interesting quote from a friend: ‘Can I expect you to be different today?’

    Well, I think it was a very powerful quote for me to read first thing on a new day. This quote froze me for a while and made me think, “Do I really want to write about an SQL Server tip, or something different?” After a little thinking, I realized that today I would go ahead and write something different. I am going to write about a few of the ideas and thoughts I had yesterday. After writing all these, I realized that if I am thinking so much in a day, and if I were to write a blog post of my random musings of the week or month, it could be very long (and boring). Here are some of my random thoughts I’d like to share with you:

      - When the airplane lands, why does everybody get up and try to rush out when their luggage will probably come 20-30 minutes later?
      - I really do not like being asked this question: “SQL Server is not using the optimal index which I just created - how can I force it?” I am not going to elaborate on this statement, but you are allowed to in the comment section.
      - Why do some people wish us Good Morning even when they meet us after 4 PM?
      - Can I optimize a query so much that it gives me a result before I execute it?
      - Is it corruption when someone does their personal household work at the office?
      - The lane where I drive is always the slowest lane.
      - Why waste time on correcting others when there are a lot of pending improvements for ourselves?
      - If I had to get a tattoo, which SQL Server Execution Plan symbol should I get?
      - Why do I reach the office so early that the coffee machine is still running its daily cleaning job?
      - Why does every laptop have its ‘Page Up’ key at a different location on the keyboard?
      - While I like color movies, I really appreciate black and white photographs.
      - I do not appreciate statements like, “If I receive your books in PDF, I will spread them to many people to give you much greater exposure. So would you please send them to me ASAP?”
      - Do not ask me, “Why does the database grow back after shrinking it every day?” I suggest you use “Search this blog” for the explanation.
      - Petrol prices are currently at INR 74. I hope the rate remains there.

    Let me ask you the same question which started my day today: “Can I expect you to be different today?”

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New JavaScript Editor

    - by Petr
    I have not written a blog post here for a few weeks; I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I have changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy, and the decision has several aspects. One of the main reasons is that the old support was written 4 years ago and its architecture is limited. Also, the APIs changed over time and it was very hard to keep the editor up to date. There is a license issue as well, etc. In short, it is time to rewrite the old JS editor. We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file, which you can unzip anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. That shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch. Starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix:

bin/netbeans.sh --userdir /path/to/new/userdir

and on Windows:

bin\netbeans.exe --userdir D:\path\to\new\userdir

For the developers who already use the continual PHP build, this is well known. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. They will be, when we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2. I'm asking whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input and comments are very important for us and will help us achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue opens, with the component Editor and the NO72 keyword pre-filled. I will write about the individual features later, but for now I will mention a few features that should work better than in the old support:

Syntactic and semantic colouring
Navigator
Mark Occurrences and GoTo Declaration
Code Completion
Formatter with many options
and more :)

Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should already offer better results. When the pop-up window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project - in the pictures, all elements that start with the letter 't'. A few features that were supported in the old JavaScript editor are not implemented yet (for example jQuery support), but we are adding these features ASAP.

    Read the article

  • Moving the Oracle User Experience Forward with the New Release 7 Simplified UI for Oracle Sales Cloud

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

In September 2013, Release 7 for Oracle Cloud Applications became generally available for Oracle Sales Cloud and HCM Cloud. This significant release allowed the Oracle Applications User Experience (UX) team to finally talk freely about Simplified UI, a user experience project in the works since Oracle OpenWorld 2012. Simplified UI represents the direction that the Oracle user experience – for all of its enterprise applications – is heading. Oracle's Apps UX team began by building a Simplified UI for sales representatives. You can find that today in Release 7, and it was demoed extensively during OpenWorld 2013 in San Francisco. This screenshot shows how Opportunities appear in the new Simplified UI for Oracle Sales Cloud, a user interface built for sales reps.

Analyst Rebecca Wettemann, vice president of Nucleus Research, saw Simplified UI at Oracle OpenWorld 2013 and talked about it with CRM Buyer in "Oracle Revs Its Cloud Engines for a Better Customer Experience." Wettemann said there are distinct themes to the latest release: "One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually -- the intuitive nature of the application and the design work they have done with this goal in mind." The software uses as few buttons and fields as possible, she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is." In fact, there are three themes driving the usability that Wettemann noted. They are simplicity, mobility, and extensibility, and we write more about them on the Usable Apps web site. These three themes embody the strategy for Oracle's cloud applications user experiences.

Simplified UI for Oracle Sales Cloud

In developing a Simplified UI for Oracle Sales Cloud, Oracle's UX team concentrated on the tasks that sales reps need to do most frequently and that are most important. "Knowing that the majority of their work lives are spent on the road and on the go, they need to be able to quickly get in and qualify and convert their leads, monitor and progress their opportunities, update their customer and contact information, and manage their schedule," said Jeremy Ashley, Vice President of the Applications UX team. Ashley said the Apps UX team has a good reason for creating a Simplified UI that focuses on self-service. "Sales people spend the day selling stuff," he said. "The only reason they use software is because the company wants to track what they're doing." Traditional systems of tracking that information include filling in a spreadsheet of leads or sales. Oracle wants to automate this process for the salesperson, and enable that person to keep everyone who needs to know up to date easily and quickly. Simplified UI addresses that problem by providing light-touch input. "It has to be useful to the salesperson," Ashley said about the Sales Cloud user experience. Simplified UI can tell sales reps about key opportunities, or provide information about a contact in just a click or two. Customer information is accessible quickly and easily with Simplified UI for the Oracle Sales Cloud. Simplified UI for Sales Cloud can also be extended easily, Ashley said. Users usually just need to add various business fields or create and modify analytical reports. The way that Simplified UI is constructed allows extensibility to happen by hiding or showing a few necessary fields. The Settings user interface, starting in Release 7, allows for simple configuration of the most important visual elements. "With Sales Cloud, we identified a need to make the application useful and very simple," Ashley said. Simplified UI meets that need.

Where can you find out more?

To find out more about Simplified UI and Oracle's ongoing investment in applications user experience innovations, come to one of our sessions at a user group conference near you. Stay tuned to the Voice of User Experience (VoX) blog – the next post will be about Simplified UI and HCM Cloud.

    Read the article

  • Adding Output Caching and Expire Header in IIS7 to improve performance

    - by Renso
    The problem: Images and other static files will not be cached unless you tell IIS to cache them. In IIS7 it is remarkably easy to do this. Web pages are becoming increasingly complex, with more scripts, style sheets, images, and Flash on them. A first-time visit to a page may require several HTTP requests to load all the components. By using Expires headers these components become cacheable, which avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often associated with images, but they can and should be used on all page components including scripts, style sheets, and Flash. Without caching, every image and other piece of static content, such as JavaScript and CSS files, is reloaded on every page request. If the content does not change frequently, why not cache it and avoid the network traffic?!

The solution: In IIS7 there are two ways to cache content: using the web.config file to set caching for all static content, and setting caching in IIS7 itself at the file or folder level, which gives you an extra level of granularity.

Web.config: In IIS7, Expires headers can be enabled in the system.webServer section of the web.config file:

  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
  </staticContent>

In the above example a cache expiration of 1 day was added. It will be a full day before the content is downloaded from the web server again. To expire the content on a specific date:

  <staticContent>
    <clientCache cacheControlMode="UseExpires" httpExpires="Sun, 31 Dec 2011 23:59:59 UTC" />
  </staticContent>

This will expire the content on December 31st 2011, one second before midnight.

Issues/Challenges: Once a file has been set to be cached, it won't be updated in the user's browser until the cache expiration passes. So be careful here with content that may change frequently, for example during development; typically in development you don't want to cache at all, for testing purposes. You could also suffix files with a timestamp or version to force a reload into the user's browser cache.

IIS7 Expire Web Content: Open up your web app in IIS and open up the sub-folders until you find the folder or file you want to add an expiration date to. In IIS6 you used to right-click and select Properties; no such luck in IIS7 - double-click HTTP Response Headers instead. Once the HTTP Response Headers window loads, look at the Actions navigation bar to the right and, all the way at the top, select SET COMMON HEADERS. The Enable HTTP keep-alive option will already be pre-selected. Go ahead and add the appropriate expiration header to the file or folder. Note that if you selected a folder, the setting applies to all images inside that folder and all nested content, even subfolders. So, two approaches, depending on what level of granularity you need.
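The version-suffix idea mentioned above can be automated. Below is a minimal hedged sketch for ASP.NET; the StaticUrl class and Versioned method are hypothetical names of my own, not part of IIS or the framework.

using System;
using System.IO;
using System.Web;

public static class StaticUrl
{
    // Hypothetical helper: appends the file's last-write time as a version
    // token, so the URL (and therefore the cached copy) changes whenever
    // the file on disk changes, while unchanged files stay cached
    public static string Versioned(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
    }
}

A page would then reference a script as <%= StaticUrl.Versioned("~/scripts/site.js") %>, which keeps a long Expires header safe even during frequent deployments: the token changes whenever the file changes.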

    Read the article

  • Internships at Oracle – a truly multicultural experience!

    - by cristian.condurache(at)oracle.com
    Hello everybody!!! Our names are Lena and Laura; we both study at the same Grande Ecole in France, IPAG, and we are about to complete our 16-week internship at Oracle in the UK. Below is a summary of our experience!

My name is Lena. I am 20 years old and joined Oracle UK in September 2010 – more specifically, I joined the EMEA Graduate Recruitment Team (EMEA stands for Europe, Middle East and Africa), and I have learned a lot about working life. It was a really good experience, which made me realize that I will be looking for a full-time position in a company in less than 3 years. I am glad to have had this first experience at Oracle. First of all because it's a very welcoming company which treats interns as employees and gives them the opportunity to show their potential. I also discovered that it is nice to work in a company where everybody knows everybody, and where the atmosphere is really good. The multicultural aspect is one of the most important and beautiful elements of Oracle. It gives you the opportunity to have contacts in many parts of the world and discover a lot of nice people. During my internship I learned a lot about Recruitment, and I discovered I want to work in a Human Resources role after I graduate. I like the contact I will have with candidates and the fact that I have to be in touch with managers and understand their needs. I would be glad to work for the company in the near future. I would like to thank all my team members for welcoming me like they did. It was a real pleasure to share this experience at Oracle and in this team, and I hope to return after I graduate.

Hi all! I am Laura. My wish for this internship was to focus on training of personal skills for employees and, at the same time of course, on the company's development... and I did it in the OTD team (EMEA Organization Talent Development Team). I could not have done anything better than this! It was truly instructive. I learnt how to work in such a big international company, the values and the rules to follow, and how to interact and be part of the organisation. In Oracle, there are so many different aspects to every department, and so many possibilities in HR as well as in Finance or Sales... The jobs are very varied, and the employees' cultures are also really different thanks to this international and multicultural company. I am working with OTD for the entire EMEA region, with many of my colleagues in other countries, with other cultures, other ways to work, and other ways to think... this is so inspiring! Oracle offers the best environment to learn about a job, as well as to learn about work life in such large companies. This company is about new technologies; it always moves fast, and everything changes quickly! You have to be aware of these changes and keep track of the wishes of customers. For OTD, of course, these customers are the employees. Looking back, I have learnt more than I would have ever thought, and I know that this is what I want to do... And now I hope to come back again! I want to thank all my team for welcoming me and integrating me so warmly. I will truly miss them!!

If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Oracle,EMEA,Recruitment,internship,OTD,team

    Read the article

  • Using Oracle Linux iSCSI targets with Oracle VM

    - by wim.coekaerts
    A few days ago I wrote a blog entry on how to use Oracle Solaris 10 (in my case), ZFS, and the iSCSI target feature in Oracle Solaris to create a set of devices exported to my Oracle VM server. Oracle Linux can do this as well, and I wanted to make sure I also tried it out on Oracle Linux; here are the results.

When you install Oracle Linux 5 update 5 (anything newer than update 3), it comes with an rpm called scsi-target-utils. To begin your quest, should you choose to accept it :), make sure this is installed:

rpm -qa | grep scsi-target

If it is not installed:

up2date scsi-target-utils

The target utils come with a tool, tgtadm, which is similar to iscsitadm on Oracle Solaris. There are again 2 components on the iSCSI server side: (1) create volumes - we will use lvm with lvcreate; (2) expose a target using tgtadm.

My server has a simple setup: all the disks are part of a single volume group called vgroot. To export a 50Gb volume I just create a new volume:

lvcreate -L 50G -nmytest1 vgroot

This will show up as a new volume in /dev/mapper as /dev/mapper/vgroot-mytest1. Create as many as you want for your environment. Since I already have my blog entry about the 5 volumes, I am not going to repeat the whole thing; you can just go look at the previous blog entry.

Now that we have created the volume, we need to use tgtadm to set it up. Make sure the service is running:

/etc/init.d/tgtd start

or

service tgtd start

(If you want to keep it running, you can do chkconfig tgtd on to start it automatically at boot time.)

Next you need a target name to set everything up. My recommendation would be to install iscsi-initiator-utils. This will create an iscsi id and put it in /etc/iscsi/initiatorname.iscsi. For convenience you can do:

source /etc/iscsi/initiatorname.iscsi
echo $InitiatorName

and from here on use $InitiatorName instead of the long, complex iqn.

Create your target:

tgtadm --lld iscsi --op new --mode target --tid 1 -T $InitiatorName

To show the status:

tgtadm --lld iscsi --op show --mode target

Add the volume previously created:

tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1

Re-run the status command to see it's there:

tgtadm --lld iscsi --op show --mode target

And just like on Oracle Solaris you now have to export (bind) it:

tgtadm --lld iscsi --op bind --mode target --tid 1 -I iqn.1986-03.com.sun:01:2a7526f0ffff

If you want to export the lun to every iscsi initiator, replace the iqn with ALL. Of course you have to add the iqn of each iscsi initiator or client you want to connect; in the case of my 2-node Oracle VM server setup, both Oracle VM servers' initiator names would have to be added. Use the status command again to see that it has this iqn under ACL:

tgtadm --lld iscsi --op show --mode target

You can drop the --lld iscsi if you want, or alias it; it just makes the command line more obvious as to what you are doing.

Oracle VM side: refer back to the previous blog entry for the detailed setup of my Oracle VM server volumes, but the exact same commands will be used there.

Discover:

iscsiadm --mode discovery --type sendtargets --portal

Login:

iscsiadm --mode node --targetname iscsi targetname --portal --login

Get devices:

/etc/init.d/iscsi restart

And voila, you should be in business. Have fun.

    Read the article

  • Oracle OpenWorld Update: Demo Pods and Hands-on Labs

    - by Doug Reid
    Less than one week to go until the start of Oracle OpenWorld 2012, and the Data Integration Solutions team is ready! We have an exciting line-up for you this year, which we have summarized in the Oracle OpenWorld Focus on Data Integration Solutions document. In past posts we have discussed session themes and our customer panel, but today I would like to summarize the Hands-on Labs and Demo Pods that we have available for attendees.

For Oracle GoldenGate we are running two Hands-On Labs this year.

Deep Dive into Oracle GoldenGate - Thursday, October 4th at 11:15 AM in the Marriott Marquis, Salon 1/2
Oracle GoldenGate provides real-time log-based change data capture and delivery between heterogeneous systems. It enables cost-effective, low-impact, real-time data integration and continuous availability solutions. This session covers Oracle GoldenGate 11g's internal product architecture and includes a hands-on lab that covers configuration examples for target database instantiation and real-time change data capture and delivery. The participants will configure Oracle GoldenGate to instantiate a secondary database that can be used for disaster recovery or a reporting instance. Come learn how easy it is to use and how this can be a very valuable and easy technology solution for your organization.

Introduction to Oracle GoldenGate Veridata - Wednesday, October 3rd at 10:15 AM in the Marriott Marquis, Salon 1/2
Oracle GoldenGate Veridata compares one set of data with another and identifies data that is out of synchronization. In this hands-on lab, you will be introduced to the key features of this product. Using the Oracle GoldenGate Veridata Web client, you will have the opportunity to configure comparison objects and rules, initiate a comparison, review the status and output of a comparison, and review out-of-sync data. As a bonus this year, we have recorded the labs and made them available on youtube.com/oraclegoldengate; these will be available the day of the labs.

Our demo pods are an opportunity for attendees to see our products, but more so to meet the product management and development teams. I would like to point out that we have two Oracle GoldenGate 11gR2 demo pods: one in the database camp and the other in the middleware camp. The one in the middleware camp will cover all platforms, while the one in the database camp will focus on the Oracle platform. The other two I would like to point out are the Monitoring Oracle GoldenGate and Oracle Enterprise Manager demo pods; both will cover methods to monitor GoldenGate, but the OEM demo pod will have a specific focus on the Oracle GoldenGate Management Pack plug-in for OEM. Below is a list of our demo pods and their locations.
Monitoring Oracle GoldenGate for End-to-End Visibility (Moscone South, Right - S-241)
Oracle Data Integrator and Oracle GoldenGate for Oracle Applications (Moscone South, Right - S-240)
Oracle GoldenGate 11gR2 New Features (Moscone South, Right - S-239)
Oracle GoldenGate 11gR2: Real-Time, Transactional Database Replication (Moscone South, Left - S-027)
Oracle GoldenGate Veridata and Adapters (Moscone South, Right - S-242)
Oracle Enterprise Manager (Moscone South, Left - S-040)

Stay tuned to our blog during the show for news and highlights from the Data Integration Solutions team. See you there.

    Read the article

  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
    If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise.

The Book
The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use.

What Is StreamInsight?
Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works:
Overview - answers the question, "what is it?"
StreamInsight Server Architecture - gives you a quick look at a high-level architectural drawing
StreamInsight Concepts - lays out an overview of the basic components
Deploying StreamInsight Entities to a StreamInsight Server - describes the mechanics of how these components work together

Getting an Example Running
Once you have this background, go ahead and install StreamInsight and get a basic example up and running:
Installation - download and install the software
StreamInsight Examples - walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run

That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper.
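As a quick taste before you dig deeper, here is a rough C# sketch of what a minimal 2.1 application can look like. It assumes references to the StreamInsight 2.1 and Rx assemblies; the instance name "Default" and the application name "demo" are placeholders for your own setup, and exact namespaces and helpers can vary between versions, so treat it as an illustration rather than copy-paste code.

using System;
using System.Linq;
using System.Reactive;
using System.Reactive.Linq;
using Microsoft.ComplexEventProcessing;
using Microsoft.ComplexEventProcessing.Linq;

class Program
{
    static void Main()
    {
        // Create an embedded server; "Default" is the instance name chosen at install time
        using (Server server = Server.Create("Default"))
        {
            Application app = server.CreateApplication("demo");

            // Source: one point event per second, wrapped from an Rx observable
            var source = app.DefineObservable(() => Observable.Interval(TimeSpan.FromSeconds(1)))
                            .ToPointStreamable(x => PointEvent.CreateInsert(DateTimeOffset.Now, x),
                                               AdvanceTimeSettings.StrictlyIncreasingStartTime);

            // Query: count the events in each 5-second tumbling window
            var query = from win in source.TumblingWindow(TimeSpan.FromSeconds(5))
                        select win.Count();

            // Sink: write each count to the console
            var sink = app.DefineObserver(() => Observer.Create<long>(c => Console.WriteLine(c)));

            // Bind the query to the sink and run it until Enter is pressed
            using (query.Bind(sink).Run("process"))
            {
                Console.ReadLine();
            }
        }
    }
}

Whatever the details, the shape stays the same: define a source, compose a LINQ query over it, bind the query to a sink, and run the process.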
Digging Deeper
Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies:
Using Event Sources and Event Sinks - sources supply data and sinks consume it; this topic gives you an overview of how they work
Publishing and Connecting to the StreamInsight Server - practical details on how to set up a StreamInsight server
A Hitchhiker's Guide to StreamInsight 2.1 Queries - queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work
Using StreamInsight LINQ - root through this section for technical details on specific query components
Using the StreamInsight Event Flow Debugger - in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application

And Even Deeper
Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these:
Input and Output Adapters - adapters can be useful for handling more complex sources and sinks
Building Resilient StreamInsight Applications - a resilient application is able to recover from system failures
Operations - this section will help you monitor and troubleshoot a running StreamInsight system

The StreamInsight Community
As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below.

Code samples and tools: Official StreamInsight code samples; Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries. The following case studies are based on earlier versions of StreamInsight, but they are still useful examples: Microsoft Media Analytics - real-time monitoring and analytics; Edgenet - responding to information from multiple sources; ICONICS - managing energy usage.

Blogs: Microsoft StreamInsight; Ruminations of J.net; Richard Seroter's Architecture Musings; pluralsight

Forums: MSDN StreamInsight Forum; stackoverflow

Training: Microsoft StreamInsight Fundamentals ("Introducing StreamInsight" is free) from pluralsight

Twitter: @streaminsight

You're a StreamInsight Expert
That should get you going. Please add any other resources you have found useful in the comments below.

Regards,
The StreamInsight Team

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you've been living in the caves of Lascaux for the past couple of days, you probably know what's happening in the world of Windows Phone. Microsoft unveiled the developer tools required to build applications and games for Windows Phone 7 at MIX10 a couple of days back. Silverlight and XNA are the major frameworks - no big surprise there. And the best news of all is that the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on. The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get started, but there is a ton of information available in other places too. In this post, I have taken the time to put all the information that I found useful in one place, and I'll keep updating it as and when I find new stuff.

Setting up the development environment

1. Install the Windows Phone Developer Tools CTP (Community Technology Preview). This will install Visual Studio 2010 Express, Silverlight, the XNA framework, and the emulator for Windows Phone 7. It also installs a few support tools.
2. Expression Blend 4 for Windows Phone:
   - Install Expression Blend 4 beta
   - Install Expression Blend Add-in Preview for Windows Phone
   - Install Expression Blend SDK Preview for Windows Phone

Installing the above tools should set your machine up for development. I installed them on my Windows Vista SP1 machine and the process went smoothly, without any major hitch. Note that the tools won't install on Windows XP; read the release notes of the CTP.

Resources and Documentation

1. Microsoft Windows Phone 7 Series Developer Training Kit
2. Programming Windows Phone 7 Series by Charles Petzold. Contains a few chapters only; gives a good preview.
3. MSDN documentation for Windows Phone 7 development
4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. The complete book will be available at a later time.
5. Windows Phone 7 Developer Forum - where you can ask about the questions and problems you run into, and the experts are there to help you.
6. For Silverlight visit silverlight.net, and for XNA game development the XNA Creators Club is the place to go; also make sure you follow Michael Klutcher's and Shawn Hargreaves' blogs.
7. And finally, the MIX'10 website. Most of the sessions will be available for download later (some are already available); click on the Windows Phone tag to get all the session details and downloads.

If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you, then I suggest you go through the Developer Training Kit. It gives a good start and ramps you up pretty quickly.

    Read the article

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours, and I decided that after checking MVA I would query the available course material over at Pluralsight. Wow - thanks to fantastic corporations and acquisitions, there are lots of courses available, nicely split by SharePoint version as well as by particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks.

Today's resource(s)

Of course, I'm "all in" for the latest developer resources:

SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience
SharePoint 2013 Developer Ramp-Up - Part 2
SharePoint 2013 Developer Ramp-Up - Part 3
SharePoint 2013 Developer Ramp-Up - Part 4
SharePoint 2013 Developer Ramp-Up - Part 5
SharePoint 2013 Developer Ramp-Up - Part 6

I guess I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too - mainly for personal reference instead of bookmarking in the browser; I'll use my own blog for that purpose:

Atkinson's SharePoint Blog
Düsseldorfer Jung Doerflers SharePoint Blog
SharePoint Community
Absolute SharePoint

The links are in no preferential order; I added them as soon as I found them. Most probably I'm going to report about specific articles from those resources during this challenge. So, stay tuned, and I'll try to provide more details on certain topics.

Takeaway

First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately, and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been roughly three years - 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase, especially when it comes to historical expressions like 'WSS'*, as I found yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it.

* WSS stands for Windows SharePoint Services 3.0, which forms the 'core engine' of SharePoint 2007. Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version.

I guess I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material:

SharePoint 2007 (aka SharePoint v3, aka SharePoint 12): Windows SharePoint Services (WSS) 3.0; Microsoft Office SharePoint Server (MOSS) 2007; .NET Framework 3.0; 32-bit or 64-bit OS
SharePoint 2010 (aka SharePoint v4, aka SharePoint 14): Microsoft SharePoint Foundation (SPF) 2010; Microsoft SharePoint Server (SPS) 2010; .NET Framework 3.5 SP1; 64-bit OS only
SharePoint 2013: Microsoft SharePoint Foundation (SPF) 2013; Microsoft SharePoint Server (SPS) 2013; .NET Framework 4.5; 64-bit OS only

After this quick excursion it gets more interesting. SharePoint 2013 offers a number of development practices and techniques under the hood, and choosing the correct path will be quite a decision process depending on the task requirements. At the moment, the following two options seem to be my future fields of operation: the Client-Side Object Model (CSOM), and the REST API with OData syntax. As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will use CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has had quite some momentum for a while now, and it would be a shame to ignore this kind of opportunity and its possibilities.
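To make the CSOM option a little more concrete for myself, here is a small hedged sketch of reading a site's title from a console app. It assumes the SharePoint 2013 client assemblies (Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll) are referenced, and the site URL is a placeholder, not a real address.

using System;
using Microsoft.SharePoint.Client;

class CsomDemo
{
    static void Main()
    {
        // Placeholder URL; point this at a real site collection
        using (var ctx = new ClientContext("https://intranet.example.com/sites/dev"))
        {
            Web web = ctx.Web;

            // CSOM batches requests: Load() registers what to fetch,
            // ExecuteQuery() sends a single round trip to the server
            ctx.Load(web, w => w.Title, w => w.ServerRelativeUrl);
            ctx.ExecuteQuery();

            Console.WriteLine("{0} ({1})", web.Title, web.ServerRelativeUrl);
        }
    }
}

The equivalent read over the REST option would be a simple GET against the site's /_api/web endpoint; I'll dig into that variant in a later post.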

    Read the article

< Previous Page | 536 537 538 539 540 541 542 543 544 545 546 547  | Next Page >