Search Results

Search found 19953 results on 799 pages for 'post get'.


  • That Tool is cURLy

    When you just use IE, Firefox or Chrome it can be easy to forget that HTTP is about more than just going to check the latest tech news at Engadget. It is a full and rich protocol, and a great way to experience that richness is the powerful command line utility cURL. cURL has a lot of options, but the syntax starts out simple. You can retrieve the contents of a web page with a simple curl http://blogs.claritycon.com/. The results should be the full text of the web page, tags and all. From there, you can use -X to specify the HTTP verb (POST, PUT, DELETE, PATCH, etc.) and -d to specify the payload of a POST or PUT. I have found cURL to be incredibly useful for two scenarios. First, as a good way to test basic web services. Second, while working a bit with CouchDB and another document-based database, cURL has helped me learn more about RESTful APIs, including different verbs and response codes. cURL is a mainstay in our environments and programming languages precisely because it is simple, powerful and discoverable. I encourage more .NET developers to take a look, bask on the command line for a while and enjoy the plain text of the web.

    -- Relevant Links --
    It's not always the case with manuals, but the manual for cURL is quite useful: http://curl.haxx.se/docs/manual.html
    To make your command line look a little nicer (and more powerful) on Windows, check out Console and add some transparency effects: http://sourceforge.net/projects/console/
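    As a quick illustration of the -X and -d flags mentioned above (the endpoint URL and JSON payload below are placeholders, not from the original post), a POST and a DELETE might look like this:

        # -X sets the HTTP verb, -d supplies the request body (placeholder URL and data)
        curl -X POST http://localhost:8080/api/items \
             -H "Content-Type: application/json" \
             -d '{"name": "widget", "qty": 10}'

        # Other verbs work the same way; -i also prints the response status and headers
        curl -i -X DELETE http://localhost:8080/api/items/42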

    Read the article

  • ORAchk version 2.2.5 is now available for download

    - by Gerry Haskins
    Those awfully nice ORAchk folks have asked me to let you know about their latest release... ORAchk version 2.2.5 is now available for download. New features in 2.2.5:

    - Running checks for multiple databases in parallel
    - Ability to schedule multiple automated runs via ORAchk daemon
    - New "scratch area" for ORAchk temporary files moved from /tmp to a configurable $HOME directory location
    - System health score calculation now ignores skipped checks
    - Checks the health of pluggable databases using OS authentication
    - New report section to report top 10 time consuming checks to be used for optimizing runtime in the future
    - More readable report output for clusterwide checks
    - Includes over 50 new Health Checks for the Oracle Stack
    - Provides a single dashboard to view collections across your entire enterprise using the Collection Manager, now pre-bundled
    - Expands coverage of pre and post upgrade checks to include standalone databases, with new profile options to run only these checks
    - Expands to additional product areas in E-Business Suite of Workflow & Oracle Purchasing and in Enterprise Manager Cloud Control

    ORAchk has replaced the popular RACcheck tool, extending the coverage based on prioritization of top issues reported by users, to proactively scan for known problems within the area of:

    - Oracle Database
      - Standalone Database
      - Grid Infrastructure & RAC
      - Maximum Availability Architecture (MAA) Validation
      - Upgrade Readiness Validation
      - Golden Gate
      - Enterprise Manager Cloud Control Repository
    - E-Business Suite
      - Oracle Payables (R12 only)
      - Oracle Workflow
      - Oracle Purchasing (R12 only)
    - Oracle Sun Systems
      - Oracle Solaris

    ORAchk features:

    - Proactively scans for the most impactful problems across the various layers of your stack
    - Streamlines how to investigate and analyze which known issues present a risk to you
    - Executes lightweight checks in your environment, providing immediate results with no configuration data sent to Oracle
    - Local reporting capability showing specific problems and their resolutions
    - Ability to configure email notifications when problems are detected
    - Provides a single dashboard to view collections across your entire enterprise using the Collection Manager

    ORAchk will expand in the future with high impact checks in existing and additional product areas. If you have particular checks or product areas you would like to see covered, please post suggestions in the ORAchk subspace in My Oracle Support Community. For more details about ORAchk see Document 1268927.2

    Read the article

  • Execution plan warnings–All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions. Pretty much as soon as I hit 'post', I noticed something rather odd happening. This statement:

        select top(10) SalesOrderHeader.SalesOrderID, SalesOrderNumber
        from Sales.SalesOrderHeader
        join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID

    throws the "Type conversion may affect cardinality estimation" warning. I've done no such conversion in my statement, so why would that be? Well, SalesOrderNumber is a computed column, "(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))", so that's where the conversion is. Wait!!! Am I saying that every type conversion will throw the warning? Thankfully, no. It only appears for columns that are used in predicates, even if the predicate / join condition is fine, and the column is indexed (and/or, presumably, has statistics). Hopefully this won't lead to too many wild goose chases, but it is definitely something to bear in mind. If you want to see this fixed then upvote my connect item here.

    Read the article

  • ISPs with good upload speeds? [closed]

    - by Josh Comley
    I am a web developer, and I spend most of my time not waiting for downloads but waiting for my latest build of a website to publish to a test site. I use the excellent BeyondCompare to ensure I only upload what I need to upload. But still, if a 2MB C# DLL has changed, so be it. And I must wait. At work we host our own servers and have a dedicated line, so for 8.5 hours of my day I have blistering upload speeds across the web, which is nice. At home, however, it's a different story. I am with Virgin Media on their XL internet package (I think). I think that means I get 256Kb upload and 20Mb download. So I have begun to wonder - are there any ISPs in the UK with good upload speeds? If you know of any for other countries, do post for others' sake, but please specify which country your post is relevant to.

    Read the article

  • Retrofit WebForms with ASP.NET MVC - NoVa Code Camp 2010.2 Demo

    - by Soe Tun
    Thank you to everyone who attended my Retrofit WebForms with ASP.NET MVC session at NoVa Code Camp 2010.2. It was a fun event for me and I hope you had a great time and learned something from it. I wish I had more time to go over some more important topics in more detail. I *promise* I will be writing a blog post series about it, since I'll have some vacation time during the December holidays, to cover some topics that I didn't get to cover in detail. Please note that the ".bak" file included in the zip file is a SQL Server database backup file. You have to restore it on your database server to run it with the source code demo. Please feel free to ask me about the demo project through Twitter or from this blog post. I'll be glad to help you out. If you want me to give this presentation at your .NET User Group, please let me know and I'll be honored to speak there also. Again, thank you all and have a great holiday season. Here is the download link to my Demo project Zip file with the PowerPoint presentation in it. Please let me know if the link doesn't work.

    Read the article

  • Log Files from bash script output

    - by neildeadman
    I have a script that runs (this works fine). I'd like to produce logfiles from its output and still show it on screen. I have this command, which creates three files, from this blog:

        ((./fk.sh 2>&1 1>&3 | tee errors.log) 3>&1 1>&2 | tee output.log) 2>&1 | tee final.log

    This does exactly what I want it to. My only issue is that I create files in my script and copy them somewhere, and I'd like to copy these logfiles there too, which I can't do whilst this script is running. I also wanted to make it easier for any user to run my script, so I created another script to run this one. According to this post (see the last post), if I put a . before the script name, I can use variables assigned in the called script from the first script. It doesn't seem to work though, and I can't figure out why or find alternative methods. Can anyone help?
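    A minimal sketch of the sourcing behaviour described above (the wrapper script name and the LOG_DEST variable are placeholders, not from the original post):

        #!/bin/bash
        # wrapper.sh - hypothetical outer script that any user can run

        # "." (or "source") runs fk.sh in the current shell, so variables it
        # assigns remain visible here afterwards.
        . ./fk.sh

        # Caveat: if the sourced call sits inside a pipeline (e.g. piped to tee),
        # it runs in a subshell and its variables are lost to this script.

        # Assuming fk.sh sets a destination such as LOG_DEST, the wrapper can reuse it:
        cp errors.log output.log final.log "${LOG_DEST}/"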

    Read the article

  • How to calculate square root in PHP [explained] [on hold]

    - by Enes Imsirovic
    First, the code! Don't forget to embed jQuery!

        <html>
        <head>
        <title>Simple jQuery and PHP Square Root example</title>
        <script src="js/jquery-1.10.1.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            $('#form').submit(function(){
                var number = $('#number').val();
                $.ajax({
                    type: "post",
                    url: "calculate.php",
                    data: "number=" + number,
                    success: function(msg){
                        $('#result').hide();
                        $("#result").html("<h3>" + msg + "</h3>").fadeIn("slow");
                    }
                });
                return false;
            });
        });
        </script>
        </head>
        <body>
        <form id="form" action="calculate.php" method="post">
            Enter number: <input id="number" type="text" name="number" />
            <input id="submit" type="submit" value="Calculate Square Root" name="submit"/>
        </form>
        <p id="result"></p>
        </body>
        </html>

    Second, the code the form talks to: calculate.php

        <?php
        if ($_POST['number'] == null) {
            echo "Please Enter a Number";
        } else {
            if (!is_numeric($_POST['number'])) {
                echo "Please enter only numbers";
            } else {
                echo "Square Root of " . $_POST['number'] . " is " . sqrt($_POST['number']);
            }
        }
        ?>

    Chiefly for beginners, to see the power of PHP :) xD Load this on your localhost. PHP files and JS: https://mega.co.nz/#!Et8zWSBb!KX2PFxa2Pzw_l-wi6QU8xi_eKTlHbtQuBsT_DvXrifk In the end it looks like this: http://imgur.com/vNnDRQ3

    Read the article

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what fors--are many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that it was a goal means that some of the choices in language design were influenced by that philosophy.

    Please do not post:
    - Syntax flame wars (I couldn't care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et al.] to denote program blocks). That's just an implementation detail.
    - "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for all programming languages to exist--good or bad.

    Please do post:
    - Philosophical ideas that language designers seem to miss.
    - Technical concepts that seem to be poorly implemented more often than not. Please do provide an example of the pain it causes and, if you have any, ideas of how you would prefer it to function.
    - Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not.
    - Conceptual features such as built-in test/assertion/contract/error handling support that you wish all programming languages would implement properly--and define properly.

    My hope is that this will be a fun and stimulating topic.

    Read the article

  • Operating System not Found after installing Ubuntu in a Sony Vaio

    - by diego8arock
    I just bought a Sony Vaio SVS15115FLB that came with Windows 7. After enjoying the PC's graphics power for a little while, I decided it was time to install Ubuntu 12.04. First, I inserted a USB stick, rebooted and pressed F11, but a message appeared saying that no OS was found on the USB, so I used a live CD instead. It booted fine and I installed Ubuntu. Then, when it was time to restart the PC, it didn't boot to GRUB but went straight to Windows, which hit a startup error and began looking for a solution; after it was done, it restarted and booted to Windows and the same startup error solution thing again. I freaked out, so I booted the Ubuntu live CD again and installed Ubuntu over everything. After it installed I rebooted, and then a message appeared saying Operating System Not Found, and I have no idea why. So I Googled again and found this post on Boot Partition. I did everything exactly as in that post, but it didn't work (by the way, this was the message): The boot files of [Ubuntu 12.04 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. It appeared the first time, then I did it all again and then it was gone. I rebooted and nothing; the same Operating System Not Found message appeared. So I decided to create a partition for Windows, hoping for something, but the message still appears. I really have no idea what to do, but there is something odd: if I insert the USB stick containing Ubuntu 11.10, the message that says there is no OS in the pen drive flashes for a fraction of a second and it boots straight to Ubuntu 12.04 without problems (and it booted to Windows when I installed it, ignoring Ubuntu). Right now I'm using it like that, but it's pretty annoying. Can anyone advise me how to fix this? I'm no expert on this kind of thing (boot, GRUB, recovery and stuff like that).

    Read the article

  • HOWTO Catch/Redirect all outgoing e-mails from webapp on Windows Server 2003

    - by John
    As suggested by another member, I have split the original post into two. To see the original post, go to http://serverfault.com/questions/134595/howto-catch-redirect-all-outgoing-e-mails-on-win2k-and-redhat-enterprise. For this question, please keep your answers specific to Windows Server 2003 only. Thanks for the help in advance. Background: I am integrating two separate web applications that are developed in ASP.NET and JSP/Struts. As such, they are hosted on two different server technologies, namely Win2K3 and Redhat Enterprise Server 5.5. Problem: There is a copy of production data in my test environment with real e-mail addresses. I need to test the e-mail functionality of these applications, but I do not want them to send out actual e-mails. Is there a way to catch and redirect all outgoing e-mails? Ideally, I would like to send all outgoing e-mails to another e-mail address (e.g., [email protected]) so my testers can look at them.

    Read the article

  • Excel wizardness needed - Group By, Sort, Count function help

    - by Chris
    Riddle me this: you have 3 part numbers with the same part name xyz, each with a quantity of 10 items. The items can be picked during the day or week, therefore changing the number of items on hand. I know I need to use the group by, sort, count and perhaps SUMIF formulas to have a running count of the number of items on hand at the end of each day (which could be positive or negative). Help? It won't let me add an image because I'm a new user: 'Oops! Your edit couldn't be submitted because: we're sorry, but as a spam prevention mechanism, new users aren't allowed to post images. Earn more than 10 reputation to post images.'

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and just thinking about the best way to organize my code. The project is basically going to be a general management system that will be capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wiki, forum, project management system, etc., each of them being just a separate 'module'. You can read more about it on my blog, posted here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage the users system. Then you just add on modules as you need them, and the first module I will be creating is a simple blog post module to test it out before I move on to the big module, which is a project management system. Now I am just trying to think of the best way to structure this so that it is easy for users to add in their own modules but easy for me to update the core system without worrying about the user modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When the user wants to add in their own modules, they would just add in a new project (or multiple projects). The thing is, I am not sure that it is even possible to use multiple projects, all with their own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page with Windows/Mac, and the response is instant, while the Linux version needs to wait for 4 seconds before it shows the response. I just hope this can be solved. More information added:

    In Ubuntu:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:21:17-- xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    While in Mac OS:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:22:33-- xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    In Ubuntu it delays 4 seconds, while Windows and Mac do not. I believe it may be related to some setting in the network config, such as packet size or window size, but I have no idea how to set this. PS: because of posting limits for new users I am not allowed to post the URL, so I have marked the scheme as xxx.
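    One hedged way to narrow this down (a diagnostic sketch only, not a fix) is to time the request and run wget in debug mode to see which phase eats the 4 seconds:

        # Wall-clock timing plus wget's own debug trace of each phase
        # (same masked URL as above; substitute the real scheme)
        time wget -d "xxx://192.168.0.135/test.cgi?cmd=" -O test.txt

    If the pause shows up before the connection is made, name/address resolution on the Ubuntu box is the first thing to check; if it appears after "awaiting response", the server side is the place to look.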

    Read the article

  • How to write PowerShell code part 2 (Using function)

    - by ybbest
    In the last post, I showed you how to use an external configuration file in your PowerShell script. In this post, I will show you how to create a PowerShell function and call an external PowerShell script. You can download the script here.

    1. In the original script, I create the site directly using the New-SPSite command. I will refactor it so that I create a new function that creates the site using New-SPSite. A PowerShell function is quite similar to a C# method. You put your function parameters in () and separate each parameter by a comma (,). Then you put your method body in {}.

        function add ([int] $num1, [int] $num2) {
            $total = $num1 + $num2
            # Return $total
            $total
        }

    2. The difference is you do not need a semi-colon (;) at the end of each statement, and when calling the function you do not need a comma (,) to separate each parameter.

        function add ([int] $num1, [int] $num2) {
            $total = $num1 + $num2
            # Return $total
            $total
        }

        # Calling the function
        [int] $num1 = 3
        [int] $num2 = 4
        $d = add $num1 $num2
        Write-Host $d

    3. If you would like to return anything from the function, you just type the object you would like to return; there is no need to type return, e.g. $ObjectToReturn rather than return $ObjectToReturn.

    Read the article

  • SharePoint Development: Making that application pool recycle less annoying

    - by Sahil Malik
    SharePoint 2010 Training: more information If you're like me, you're easily distracted. What was I talking about again? Oh yes! See, whenever I am writing a farm solution that requires an application pool recycle, I hit CTRL_SHIFT_B to deploy (how to remap CTRL_SHIFT_B to deploy?). Now, I have what, a good 15-20 seconds to goof off and check my gmail/facebook/twitter/IM conversations, right? Sure .. 30 minutes later .. my application pool has died and recycled 3 times on its own. Hmm. Clearly, this was becoming a serious issue. So what am I supposed to do here? Well, worry not! Here is what you need to do: right click\Properties on your SharePoint project, look for the "SharePoint" tab, find the post-deployment command line, and add the following post-deployment command line. (Be careful: copy and paste it exactly as is.)   Read full article ....

    Read the article

  • What is the "default" software license?

    - by Tesserex
    If I release some code and binaries, but I don't include any license at all with it, what are the legal terms that apply by default (in the US, where I am)? I know that I automatically have copyright without doing anything, but what restrictions are there on it? If I upload my code to GitHub and announce it as a free download / contribute at will, then are people allowed to modify and close-source my work? I haven't said that they cannot, as the GPL would, but I don't feel that it would by default be acceptable to steal my work either. So what can and cannot people do with code that is freely available, but has absolutely no licensing terms attached? By the way, I know that it would be a good idea for me to pick a license and apply it to my code soon, but I'm still curious about this. Edit: Thanks! So it looks like the consensus is that it starts out very restricted, and then my actions imply any further rights. If I just put software on my website with no security, it would be an infringement to download it. If I post a link to that download on a forum, then that would implicitly give permission to use it for free, but not distribute it or its derivatives (but you can modify it for your own use). If I put it on GitHub, then it is conveyed as FOSS. Again, this is probably not codified exactly in law but may be enough to be defensible in court. It's still a good idea to post a complete license to be safe.

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: "After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads. :-D" I've already found out how to increase the stripe cache and this worked pretty well, but I'd like to know more about an external bitmap. I have an incredibly fast (540MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure. I've only known about them as long as I've known that post. A few questions:

    - What is a bitmap (in terms of mdadm)?
    - What are the advantages of an internal bitmap (over external)?
    - What are the advantages of an external bitmap (over internal)?
    - How do I switch between the two?

    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put data at significant risk, please let me know.
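    For the "how do I switch" part, a hedged sketch of the mdadm commands involved (the array name and bitmap path are placeholders; check man mdadm and take a backup first, since the data matters):

        # Remove the existing (internal) bitmap first
        sudo mdadm --grow /dev/md0 --bitmap=none

        # Re-add it as an external bitmap; the file must live on a filesystem
        # that is NOT on the array itself (e.g. the fast RAID0 SSD)
        sudo mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0-bitmap

        # To switch back to an internal bitmap later
        sudo mdadm --grow /dev/md0 --bitmap=none
        sudo mdadm --grow /dev/md0 --bitmap=internal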

    Read the article

  • Updated Virtual Machine for VS/TFS 2010

    - by Enrique Lima
    If you had downloaded the previous version of the virtual machines, then you are likely aware they are set to expire soon (12/15/2010). Brian Keller announced yesterday (blog post here) the availability of a vm refresh (new expiration set for 6/1/2011). What is part of the refresh? Here is the excerpt from Brian’s post:

    “The version of this virtual machine which was refreshed on December 9, 2010, includes the following additions:
    · Visual Studio 2010 Feature Pack 2
    · Team Foundation Server 2010 Power Tools (September 2010 Release)
    · Visual Studio 2010 Productivity Power Tools (these are disabled in VS so that the screenshots of the hands-on-labs still match; you can quickly enable the Productivity Power Tools via Tools -> Extension Manager from within Visual Studio)
    · Test Scribe for Microsoft Test Manager
    · Visual Studio Scrum 1.0 Process Template
    · All Windows Updates through December 8, 2010
    · Lab Management GDR (KB983578)
    · Visual Studio 2010 Feature Pack 2 pre-requisite hotfix (KB2403277)
    · Microsoft Test Manager hotfix (KB2387011)
    · Minor fit-and-finish fixes based on customer feedback
    · A new expiration date of June 1, 2011”

    The links to download the Virtual Machines are:
    Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc&displaylang=en
    Windows Virtual PC (Win 7): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e&displaylang=en

    Read the article

  • Still Alive…

    - by MOSSLover
    As GLaDOS would say at the end of Portal, "I'm still alive…". I am around, but I'm just not posting as frequently as I should. I am trying to get acclimated to my new job, planning SharePoint Saturday New York City and Women in SharePoint, plus trying to lead a normal life doing normal chores and hanging out with my boyfriend. What does this mean? Well, I'm trying to cut back to one or two events a month, which will include Heartland Developer Conference, Best Practices Conference, SPS Ozarks, SPS NYC (not speaking, running), and maybe SPS Denver and/or SPS East Bay. So with the new-job acclimation the blog suffers and Twitter is getting less love. I'm only posting on Twitter at night. I will try to blog when I can as I see more 2010 and 2007 things that I find interesting to share. I guess when you are a new employee you try to figure out what's going on for the first few months. It's really hard to post on SharePoint issues while that happens. I'm really sorry, guys, and I will try harder to post at least a couple of times a month (and maybe moderate comments slightly better). I hope that you all have a good weekend.

    Read the article

  • Commands in Task-It - Part 1

    Download Source Code NOTE: To run the source code provided you will need to update to the RC (release candidate) versions of Silverlight 4 and Visual Studio 2010. In recent blog posts, like my MVVM post, I used Commands to invoke actions, like saving a record. In this rather simplistic sample I will talk about the basics of Commands, and in my next post will get deeper into it. What is a Command? I remember the first time a UI designer used the word "command" I wasn't really sure what she was referring to. I later realized that it is just a term that is used to represent some UI control that can invoke an action, like a Button, HyperlinkButton, RadMenuItem, RadRadioButton, etc. Why should we use Commands? I'm sure you're familiar with the code-behind approach of handling events. For example, if you had a Button and a RadMenuItem that ...

    Read the article

  • The last word on C++ AMP...

    - by Daniel Moth
    Well, not the last word, but the last blog post I plan to do here on that topic. Over the last 12 months, I have published 45 blog posts related to C++ AMP on the Parallel Programming in Native Code blog, and the rest of the team has published even more. Occasionally I'll link to some of them from my own blog here, but today I decided to stop doing that - so if you relied on my personal blog pointing you to C++ AMP content, it is time you subscribed to the msdn blog. I will continue to blog about other topics here of course, so stay tuned. So, for the last time, I encourage you to read the latest two blog posts I published on the team blog, bringing together essential reading material on C++ AMP:

    - Learn C++ AMP - a collection of links to take you from zero to hero.
    - Present on C++ AMP - a walkthrough on how to give a presentation, including slides.

    Got questions on C++ AMP? Hit the msdn forum! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • ArchBeat Link-o-Rama for October 23, 2013

    - by OTN ArchBeat
    Virtual Dev Day: Oracle ADF Development - Web, Mobile, and Beyond
    This free virtual event includes technical sessions that range from introductory to deep dive, covering Oracle ADF and Oracle ADF Mobile. Multiple tracks cover every interest and every level and include live online Q&A for answers to your technical questions. Register now!
    Americas: Tuesday, November 19, 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT
    APAC: Thursday, November 21, 10am–1:30pm IST (India) / 12:30pm–4pm SGT (Singapore) / 3:30pm–7pm AESDT
    EMEA: Tuesday, November 26, 9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST

    A Roadmap for SOA Development and Delivery | Mark Nelson
    Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt
    Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle’s data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."

    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt
    Java community manager Tori Wieldt interviews a robot and human.

    Nordic OTN Tour 2013 | Lonneke Dikmans
    Oracle ACE Director Lonneke Dikmans checks in from the Stockholm leg of the Nordic OTN Tour for 2013, sponsored by the Danish Oracle User Group and featuring fellow ACE Directors Tim Hall and Sten Vesterli, plus local speakers at various stops. Lonneke's post includes the slides from three of the presentations.

    Thought for the Day
    "Some people approach every problem with an open mouth." — Adlai E. Stevenson, 23rd Vice President of the United States (October 23, 1835 – June 14, 1914)
    Source: brainyquote.com

    Read the article

  • My computer will not reboot after fresh install of ubuntu 12.04LTS

    - by user170715
    I bought a new computer yesterday and it came with Windows 8. When installing Ubuntu, I chose the erase-and-install option, thinking that Ubuntu would install easily like it did for my old laptop... After a successful install, and following the instructions telling me to reboot to finish installation and remove the installation media, it worked and my computer booted fine. However, once I began installing updates via Update Manager and activating the additional driver {ATI/AMD proprietary FGLRX graphics driver (post-release updates)} out of the following:

    - Experimental AMD binary Xorg driver and kernel module
    - ATI/AMD proprietary FGLRX graphics driver (*experimental* beta)
    - ATI/AMD proprietary FGLRX graphics driver (post-release updates)

    and then rebooted to finish making changes, I got an error (Reboot and select proper boot device). At this point I was stuck, so I eventually reinstalled Ubuntu and repeated the exact same steps until right before the reboot to finish making changes. However, this time I used the Boot Repair tool:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair

    After running the program I get a "boot successfully repaired" message. Then I try to reboot again and get the GNU GRUB screen, where it asks which I would like to boot: normal, recovery, or memory test. Once it begins loading, you see the code moving across the screen, then it pauses when it gets to [...] and doesn't do anything. If someone could tell me how to fix this or get Windows 8 back soon, I'd appreciate it, because like I said I just bought it yesterday and now I can't even use it.

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access, in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:

    "We encourage you to adopt ODBC in the development of your new and future versions of your application. You don’t need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap."

    I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error: "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same funcitonality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet - best to wait for the next version of SSIS before doing that.

    I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:
    - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item, this may only be a problem on 64bit.
    - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh, I just can't wait for the upgrade mess that ensues from this one!!!)

    You have been warned!
    @jamiet

    Read the article
