Search Results

Search found 19966 results on 799 pages for 'wild thing'.


  • PayPal PDT and IPN, how do they work?

    - by slow diver
    PDT (Payment Data Transfer) means getting the transaction data for a purchase that was made on the PayPal site, so that you can fetch it on your own site, display it to the user, and perhaps store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is this:

    "Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string."

    IPN: I have set up Instant Payment Notification according to this documentation. That basically means logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. It is used to complete an order so that the product can be shipped. What I did was set up a PHP page. I created a table, and whenever that page is called (or hit), it registers an entry in the table, so I know a notification came from PayPal. But it does not work either.

    What am I really doing wrong? The first thing I want to troubleshoot, though: when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but the automatic redirection just does not work. Instead, the URL is shown to him as an option after the payment confirmation is displayed. Can someone guide me through how the PDT process goes? Where do I make the request for PDT: is it along with the very first request (the Buy Now button), or is it sent later?

    Addition: I found some good sample code for how everything should work, but it still does not work. I use this code for IPN: http://officetrio.com/modules/free-php-paypal-ipn-script.php. I am using this one for PDT (it uses SSL; I changed SSL to regular HTTP, copying the PayPal version, and it still does not work): http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/
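
    Here is roughly what I think the PDT postback should look like, pieced together from the documentation quoted above (untested; YOUR_PDT_IDENTITY_TOKEN is a placeholder for the identity token from the PayPal profile, and error handling is omitted):

        <?php
        // Return URL handler: PayPal appends ?tx=<transaction_token> as a GET variable.
        $tx = $_GET['tx'];
        $post = 'cmd=_notify-synch&tx=' . urlencode($tx) . '&at=YOUR_PDT_IDENTITY_TOKEN';

        // POST the token back to PayPal to retrieve the transaction details.
        $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        // My understanding: a good reply starts with "SUCCESS", followed by
        // name=value lines with the transaction details to display and archive.
        if (strpos($response, 'SUCCESS') === 0) {
            // parse the name=value lines, show them to the customer, store them
        }
        ?>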


  • Connect Team Foundation Service/TFS 2012 with Visual Studio 2010 & Visual Studio 2008

    - by Vishal
    Hello, Microsoft finally released the Team Foundation Service in late October 2012 after its long preview phase. I was already using the TFS Preview, which was free, and I was happy to see Microsoft release the Team Foundation Service also FREE for up to 5 users. Isn't that great news? I know there are a bunch of other free source control repositories out there (GitHub, Bitbucket, SVN, etc.), but I somehow like TFS better. The other good thing about the final release was that I didn't have to do any kind of migration of my code from the preview to the final release version. I just changed the TFS connection URL and it worked like a charm. Anyway, if you are a startup with a small team and need awesome source control along with all the good project management, continuous integration (build, test, deploy), team collaboration, and Agile/Scrum planning features, then Team Foundation Service is your answer. Microsoft has not yet released its pricing for more than 5 users and will be releasing it sometime in early 2013. What if, as of now, you have a team of more than 5 users and you want to use Team Foundation Service? The good news is that you can use it for FREE, but when the final pricing is released, you will have to transition to a paid plan.

    Lots of story; getting to the point: connecting to Team Foundation Service with Visual Studio 2012 is straightforward and works out of the box, but it won't for previous versions of Visual Studio. You will have to upgrade to the latest service pack first and then install the forward compatibility pack (1st: Service Pack, 2nd: Forward Compatibility pack).

    For Visual Studio 2010:
    - Visual Studio 2010 Service Pack 1.
    - Visual Studio 2010 forward compatibility for TFS 2012 and Team Foundation Service.

    For Visual Studio 2008:
    - Visual Studio 2008 Service Pack 1.
    - Visual Studio 2008 forward compatibility for TFS 2012 & Team Foundation Service.
    - Restart your system.

    Visual Studio 2008 will not work if you only put https://xxx.visualstudio.com. You will have to include your collection name too, e.g. https://xxx.visualstudio.com/DefaultCollection (DefaultCollection is the collection name unless you created another).

    By the way, it doesn't matter if you are an Apple application developer or an Android app developer; you can still use Team Foundation Service as your source control. Below are a few links for connecting to Team Foundation Service from other IDEs:
    - Connect Eclipse to Team Foundation Service.
    - Connect XCode to Team Foundation Service.

    Happy coding. Vishal Mody


  • Choose Custom New Tab Pages in Chrome

    - by Asian Angel
    For most people the default New Tab page in Chrome works perfectly well for their purposes. But if you would prefer to choose what opens in a new tab yourself, then you will definitely want to have a look at the "Define your own new tab!" extension for Google Chrome.

    Before: Unless you are using a Speed Dial (or similar) extension, each time you click on the "New Tab" button you get the same old page. It would certainly be a lot more satisfying if you could choose custom webpage(s) to open as new tabs.

    After: Once you have the extension installed, the best thing to do is click on the "New Tab" button. That will open the options page, where you can enter one or two custom website URLs of your choosing. Once you have your custom URLs entered, click on "Save". As soon as you click on the "New Tab" button, your new custom webpage(s) will open. If you chose two webpages, the first choice will open focused on the right side instead of the left. Clicking on the "Home" button will also open the webpage(s) that you chose. The webpage(s) that you chose will also open as your starting home pages each time you start your browser.

    Conclusion: If you have wanted to choose your own custom New Tab page, then this is the extension you have been waiting for.

    Links: Download the "Define your own new tab!" extension (Google Chrome Extensions)


  • How do I set up the nvidia graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i; the clue comes from Xorg.0.log:

        [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
        [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This whole thing started from video tearing when watching MythTV recordings; it didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the Screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Keyboard0" "CoreKeyboard"
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Mouse0"
        #    Driver "mouse"
        #    Option "Protocol" "auto"
        #    Option "Device" "/dev/psaux"
        #    Option "Emulate3Buttons" "no"
        #    Option "ZAxisMapping" "4 5"
        #EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Keyboard0"
        #    Driver "kbd"
        #EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SAMSUNG"
            HorizSync 26.0 - 81.0
            VertRefresh 24.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 240"
            Option "TripleBuffer" "1"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    After a little digging, the question changes slightly, to wit... Per chapter 19 of the nvidia README: "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI-connected LCD monitor says to use the first detailed timing as preferred:

        Prefer first detailed timing : Yes

    Also:

        (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

        (--) NVIDIA(0): Detailed Timings:
        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):   Pixel Clock      : 148.50 MHz
        (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):   VSyncEnd, VTotal : 1089, 1125
        (--) NVIDIA(0):   H/V Polarity     : +/+

    This is the actual mode selected (from Xorg.0.log):

        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):   Pixel Clock      : 74.18 MHz
        (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):   VSyncEnd, VTotal : 1094, 1124
        (--) NVIDIA(0):   H/V Polarity     : +/+
        (--) NVIDIA(0):   Extra            : Interlaced
        (--) NVIDIA(0):   CEA Format       : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
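
    My current guess at a fix, which I have not yet tested: force an explicit progressive metamode instead of nvidia-auto-select in the Screen section, something like:

        Option "metamodes" "DFP-1: 1920x1080_60 +0+0"

    (The mode name is my guess from the driver's mode list; if this works, "TVStandard" may be unnecessary.)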


  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as for, while, and do structured programming constructs generate code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements.

    In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific example of a system for multithreaded programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from http://en.wikipedia.org/wiki/OpenMP. Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example, things are pretty simple, and a simple review is enough to show that the wait() and signal() calls are matched and that, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated, using multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions, callable by many threads. The consequences of creating deadlock, or of failing to make things thread safe, can be hard to manage.

    Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
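
    For reference, here is a complete, compilable version of that OpenMP excerpt as I understand it (my own reconstruction; built with g++ -fopenmp):

        #include <iostream>
        #include <omp.h>

        int main()
        {
            #pragma omp parallel
            {
                int th_id = omp_get_thread_num();
                // The runtime serializes this region; no explicit semaphore appears.
                #pragma omp critical
                {
                    std::cout << "Hello World from thread " << th_id << '\n';
                }
            }
            return 0;
        }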


  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is.

    What is NewSQL? NewSQL stands for the new scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; rather, it is about vendors who support emerging data products with relational database properties (like ACID, transactions, etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes:

    "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL.

    In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL.

    Categories of NewSQL: There are three major categories of NewSQL.
    - New Architecture: each node owns a subset of the data, and queries are split into smaller queries sent to the nodes to process the data. E.g. NuoDB, Clustrix, VoltDB.
    - MySQL Engines: highly optimized storage engines for SQL with the interface of MySQL. E.g. InnoDB, Akiban.
    - Transparent Sharding: these systems automatically split the database across multiple nodes. E.g. ScaleArc.

    Summary: In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL.

    Tomorrow: In tomorrow's blog post we will discuss the role of cloud computing in Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • 25 reasons to attend JavaOne 2012

    - by arungupta
    The 17th JavaOne is just around the corner, less than 3 weeks away! If you are still thinking about registering for the conference, here are my top 25 reasons to attend:

    1. Biggest gathering of Java geeks in the world.
    2. Latest and greatest content, with 475 technical sessions/Birds of a Feather/Hands-on Lab sessions (about 20% more than last year).
    3. Reduced number of keynotes to make room for more technical content.
    4. No product pitches; an exclusive focus on technology (I can tell you that from my experience as a track lead).
    5. Sessions divided into different in-depth technical tracks so you can focus on the Java technology that most interests you.
    6. Reruns of several popular sessions.
    7. Expert- and practitioner-led HOLs and tutorials.
    8. Rock star speakers, panelists, faculty, and instructors. Meet several Java Champions and JUG leaders from all around the world.
    9. Engage with speakers and discuss with fellow developers in a casual setting with lots of networking space.
    10. A complete conference dedicated to Java Embedded.
    11. Extensive and fast-paced hands-on University Sessions on Sunday; learn while you are at the conference. You can register for Java University only or attend it with the conference.
    12. Duke's Choice Awards recognize and celebrate the most innovative usage of the Java platform.
    13. DEMOgrounds and Exhibition Hall provide extensive opportunities for networking and engagement with the biggest names in Java (with dedicated hours on each day as well).
    14. A dedicated day for Java User Groups and communities (GlassFish Community Event and NetBeans Community Day).
    15. Multiple registration packages to meet your needs.
    16. Pay for 4 full conference passes and get a fifth one free.
    17. Students and bloggers get a free pass.
    18. Geek Bike Ride with fellow speakers and attendees in a casual setting.
    19. Greenest conference on the planet.
    20. Enjoy different cuisines in San Francisco, take a trip to Alcatraz or Napa Valley, or go running on the crooked street ;-) There are tons of tourist opportunities in/around San Francisco.
    21. Tons of parties during the conference: in the evening, late night, and early morning. Don't forget the Thirsty Bear Party!
    22. Pearl Jam and Kings of Leon at the Appreciation Party.
    23. Oracle Music Festival at Yerba Buena Gardens.
    24. Grab the bragging rights: "I have attended JavaOne"!
    25. Learn a new skill, build new connections, conceive a new idea, and push the boundaries of Java in the most important educational and networking event of the year for Java developers and enthusiasts.

    With so much geekgasm going on during the 5 days of JavaOne, is there a reason for you to wait? Register for the conference now! Grab your buttons, banners, and other collateral at the JavaOne Toolkit. You can also send an email to [email protected]. And reach out to us using different social media channels... As a 13-year veteran of the conference, I can tell you this is something every Java developer must experience! I will be there, will you?


  • Rewriting code under BSD license

    - by Frank
    I am currently studying OpenGL with the OpenGL SuperBible, 5th edition. I've found some C++ code distributed with the book that is interesting for me (see also on Google Code). That code is under the New BSD License. I am writing my software in C# with the SharpGL wrapper, and I'd like to know the following things:

    Can I rewrite that C++ to C#? Edit: I am interested in using things like GLBatch, GLShaderManager, and some other things from GLTools. The problem is that the library is in C++, but I use C#.

    How do I have to mark my source code if I put it somewhere like my GitHub account? What should the disclaimer be? The original disclaimer looks like:

        /*
        GLShaderManager.h

        Copyright (c) 2009, Richard S. Wright Jr.
        All rights reserved.

        Redistribution and use in source and binary forms, with or without modification,
        are permitted provided that the following conditions are met:

        Redistributions of source code must retain the above copyright notice, this list
        of conditions and the following disclaimer.

        Redistributions in binary form must reproduce the above copyright notice, this
        list of conditions and the following disclaimer in the documentation and/or
        other materials provided with the distribution.

        Neither the name of Richard S. Wright Jr. nor the names of other contributors
        may be used to endorse or promote products derived from this software without
        specific prior written permission.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
        ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
        WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
        DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
        ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
        (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
        LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
        ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
        (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
        SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
        */

    Edit: Should my copyright look something like this after rewriting?

        Copyright (c) 2014, My Name
        Copyright (c) 2009, Richard S. Wright Jr.
        All rights reserved.

        Redistribution...................
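
    For concreteness, here is the kind of header I have in mind for each ported file (the port note is my own wording; the full list of BSD conditions and the disclaimer would follow verbatim from the original):

        /*
            GLShaderManager.cs
            C# port of GLShaderManager.h from the OpenGL SuperBible sample code.

            Copyright (c) 2014, My Name (C# port)
            Copyright (c) 2009, Richard S. Wright Jr.
            All rights reserved.

            [BSD conditions and disclaimer reproduced verbatim from the original]
        */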


  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this. I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months. Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking that kind of support, I'm talking more of a second level support guy (when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of work place this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code, they are more or less just knowing the system architecture, working with making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work maintenance, these are basically just bug fixes/improving bad code, so this sucks as well, however at least it's related to coding. Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything, I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy that knows it's not that great of an opportunity. I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being, no point making a bad impression on anybody, but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it/how you guys would feel. Thanks guys.


  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and in each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and were created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

        e2fsck 1.41.14 (22-Dec-2010)
        /dev/domu/s-ub-02-root contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Inode 525418 has an invalid extent
            (logical block 8959, invalid physical block 0, len 0)
        Clear<y>? yes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

    The console typically shows the following errors for the system disk (xvda2):

        [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
        [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

    I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to rule out a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

        /exports/users          192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/music    192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/opt            192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

    /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve


  • Cannot boot into system after deleting partition

    - by Clayton
    Okay... so this was kind of a stupid thing for me to do, now that I think back on it. I was experiencing a ton of lag and less usable memory after installing Ubuntu 12.04. So after remembering I had installed multiple server versions of Ubuntu 12.04 by mistake, I went into Disk Management and proceeded to delete each and every one. Everything went fine. Up until this week, I had not experienced any problems. But starting yesterday I began to get lag just as I had before, and nothing fixed the problem. I decided to remove the Ubuntu partition, since I was also experiencing a visual error when given the option to select an OS to boot (the selection screen doesn't come up at all, and I received a monitor resolution error instead, but could still access both Windows and Ubuntu via the arrow keys). After deleting the Ubuntu partition, so that I could see if running just Windows would fix the problem, I proceeded with what I was doing, installing a few programs that were not tied to my predicament in any way. Upon rebooting my desktop, however, I received the following error:

        error: unknown filesystem.
        grub rescue>

    Hoping I could boot into Ubuntu via a pendrive and possibly back up my important files and wipe the hard drive to start fresh, I installed Ubuntu 13.04, but even that does not boot. Instead, I get this message on a terminal screen:

        SYSLINUX 4.06. EDD 4.06-pre1 Copyright (C) 1994-2011 H. Peter Anvin et al
        ERROR: No configuration file found
        No DEFAULT or UI configuration directive found!
        boot:

    So, more or less, my desktop is screwed. I need to be able to get at the files inside because of my job as an artist, as well as retrieve the documents for my stories stored on Windows. Once I succeed in solving this once and for all, I know for a fact I will stick to Ubuntu only, and install whatever is required to run any Windows applications I used to use or need to use. I would rather not reformat the hard drive; if I need to, it is a last resort. And I doubt I can use a Windows Recovery Disk to get my files back, as my mom has thrown out a lot of the installation disks and paperwork I would need to even follow through with that. :\

    Keep in mind that I am a novice/newbie when it comes to Linux, but I am hoping to become better at it as time goes by. I appreciate any help you guys can give me. This will probably be the last time I attempt to do anything that could risk the well-being of my PC. (I've also looked through various questions on the site and tested a lot of the solutions. None seem to have worked.)
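
    In case it helps answers: is something like this, from a live USB session, a sane way to at least copy my files off first? (sdaX is a placeholder for whatever fdisk actually shows.)

        sudo fdisk -l                          # find the Windows (NTFS) partition, e.g. /dev/sda2
        sudo mkdir -p /mnt/win
        sudo mount -o ro /dev/sdaX /mnt/win    # mount read-only, just to copy files out
        cp -r /mnt/win/Users/MyName/Documents ~/rescued-files/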


  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR", or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction, but it still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, a term which I first heard from my friends over at RSR but which was originally coined at IDC.

    First, retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today.

    Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for each particular interaction. A diagram in the soon-to-be-released NRF Mobile Blueprint v2 shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways.

    I'm looking forward to the day in which I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.


  • viable part-time career in IT/programming?

    - by Rider
    Hi, I'd like to ask for some career advice from you people. Is there a viable job/career that can be done in programming/IT for the long term? Right now, I am thinking about the website (PHP?) developer path.

    My background: I have a degree in computer science and have been a programmer/system analyst for almost 10 years. Lately I took a big break from programming and studied for a B.Arch. degree (yes, architecture), only to discover that architecture has offered zero (0) jobs where I'm from for 3 years already (and no, I am not going to move, and the grass is not greener in other places). I have never been particularly interested in programming; in fact, I was bored by it. But I was always quite good at both programming and system analysis, and very valued by practically all my employers. On the other hand, I have never been valued or offered a good job in any other field (although I can do many things, like design, architecture, translations, documentation, teaching, etc.). I guess the human component has always been more important for me in programming jobs: I value all the good people I worked with, but not the projects. However, I have about zero skills or desire to be a project manager. I also have close to zero skills for selling myself. I like it best when I can do "my thing" and have my niche, some ownership of a project.

    Right now my career plan is to do part-time programming and part-time yoga teaching. I have already started the yoga teaching part. Do you think that part-time programming is viable? And what niche works best for that? I have considered web development, QA, or software development in a company like I did before. However, my fear is that when you do programming part-time, you get the most boring coding work, only to see your colleagues move to more interesting projects and up their respective career ladders. I also fear that part-timers are not especially needed either. And, since I don't share much enthusiasm for programming, I'd rather not be around young programmers boiling with geeky enthusiasm about coding; a QA mindset, with people from different backgrounds and life paths, might work better for me.

    Thanks for any advice, --Rider


  • ArchBeat Facebook Friday: Top 10 Posts - August 15-21, 2014

    - by Bob Rhubart-Oracle
    As hot as molten rock? Not quite. But among the 5,313 fans of the OTN ArchBeat Facebook Page, these Top 10 items were the hottest over the past seven days, August 15-21, 2014.

    - Oracle BPM 12c Gateways (Part 1 of 5): Exclusive Gateway | Antonis Antoniou: Oracle ACE Associate Antonis Antoniou begins a five-part series with a look at the gateway control flow components in Oracle BPM and how they can be used to control process flow.
    - Slicing the EDG: Different SOA Domain Configurations | Antony Reynolds: Antony Reynolds introduces three different configurations for a SOA environment and identifies some of the advantages of each.
    - How to introduce DevOps into a moribund corporate culture | ZDNet: Confused about DevOps? This post from ZDNet's Joe McKendrick, which includes insight from Phil Whelan, just might clear some of the fog.
    - Oracle Identity Manager Role Management With API | Mustafa Kaya: Mustafa Kaya shares some examples of role management using the Oracle Identity Management API.
    - Podcast: Redefining Information Management Architecture: Oracle Enterprise Architect Andrew Bond joins Oracle ACE Directors Mark Rittman and Stewart Bryson for a conversation about their collaboration on a new Oracle Information Management Reference Architecture.
    - WebCenter Sites Demo Integration with Endeca Guided Search | Michael Sullivan: A-Team solution architect Michael Sullivan shares the details on a demo that illustrates the viability of integrating WebCenter Sites with Oracle Endeca.
    - Wearables in the world of enterprise applications? Yep.: Oh yeah, wearables are a THING. Here's a look at how the Oracle Applications User Experience team has been researching wearables for inclusion in your future enterprise applications.
    - Getting Started With The Coherence Memcached Adaptor | David Felcey: Let David Felcey show you how to configure the Coherence Memcached Adaptor, and take advantage of his simple PHP example that demonstrates how Memcached clients can connect to a Coherence cluster.
    - OTN Architect Community Newsletter - August Edition: A month's worth of hot stuff, all in one spot. Featuring articles on Java, Coherence, WebLogic, Mobile and much more.
    - 8,853 Conversations About Oracle WebLogic: Do you have a question about WebLogic? Do you have an answer to a question about WebLogic? You need to be here.


  • Content Based Routing with BRE and ESB

    - by Christopher House
    I've been working with BizTalk 2009 and the ESB Toolkit for the past couple of days. This is actually my first exposure to ESB, and so far I'm pleased with how easy it is to work with. Initially we had planned to use UDDI for storing endpoint information. However, after discussing this with my client, we opted to look at BRE instead of UDDI, since we're already storing transforms in BRE. Fortunately, making the change from UDDI to BRE was quite simple. This solution of course has the added advantage of not needing to go through the convoluted process of registering our endpoints in UDDI.

    The first thing to remember if you want to do content-based routing with BRE and ESB is that the pipelines included in the ESB Toolkit don't include disassembler components. This means you'll need to first create a custom receive pipeline with the necessary disassembler for your message type, as well as the ESB components, the itinerary selector and the dispatcher.

    Next you need to create a BRE policy. The ESB.ContextInfo vocabulary contains vocabulary links for the various items in the ESB context dictionary. In this vocabulary, you'll find an item called Context Message Type; use this as the left-hand side of your condition. Set the right-hand side to your message type, something like http://your.message.namespace/#yourrootelement. Now find the ESB.EndPointInfo vocabulary. This contains links to all the properties related to endpoint information. Use the various set operators in your rule's action to configure your endpoint. In the example sketched at the end of this post, I'm using the WCF-SQL adapter.

    Now that the hard work is out of the way, you just need to configure the resolver in your itinerary. Nothing complicated here: just select BRE as your resolver implementation and select your policy from the drop-down list. Note that when you select a policy, the Version field will be automatically filled in with the version of your policy. If you leave this as-is, the resolver will always use that policy version. Alternatively, you can clear the version number, and the resolver will use the highest deployed version.
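
    To make that concrete, the finished rule reads roughly like this pseudo-form (the endpoint values are illustrative placeholders, not from a real deployment):

        IF   ESB.ContextInfo.Context Message Type == "http://your.message.namespace/#yourrootelement"
        THEN ESB.EndPointInfo.Transport Type     = "WCF-SQL"
             ESB.EndPointInfo.Transport Location = "mssql://yourserver//yourdatabase?"
             ESB.EndPointInfo.Action             = "TableOp/Insert/dbo/YourTable"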


  • SQL Community – stronger than ever

    - by Rob Farley
    I posted a few hours ago about a reflection of the Summit, but I wanted to write another one for this month’s T-SQL Tuesday, hosted by Chris Yates. In January of this year, Adam Jorgensen and I joked around in a video that was used for the SQL Server 2012 launch. We were asked about SQLFamily, and we said how we were like brothers – how we could drive each other crazy (the look he gave me as I patted his stomach was priceless), but that we’d still look out for each other, just like in a real family. And this is really true. Last week at the PASS Summit, there was a lot going on. I was busy as always, as were many others. People told me their good news, their awful news, and some whinged to me about other people who were driving them crazy. But throughout this, people in the SQL Server community genuinely want the best for each other. I’m sure there are exceptions, but I don’t see much of this. Australians aren’t big on cheering for each other. Neither are the English. I think we see it as an American thing. It could be easy for me to consider that the SQL Community that I see at the PASS Summit is mainly there because it’s a primarily American organisation. But when you speak to people like sponsors, or people involved in several types of communities, you quickly hear that it’s not just about that – that PASS has something special. It goes beyond cheering, it’s a strong desire to see each other succeed. I see MVPs feel disappointed for those people who don’t get awarded. I see Summit speakers concerned for those who missed out on the chance to speak. I see chapter leaders excited about the opportunity to help other chapters. And throughout, I see a gentleness and love for people that you rarely see outside the church (and sadly, many churches don’t have it either). Chris points out that the M-W dictionary defined community as “a unified body of individuals”, and I feel like this is true of the SQL Server community. It goes deeper though. It’s not just unity – and we’re most definitely different to each other – it’s more than that. We all want to see each other grow. We all want to pull ourselves up, to serve each other, and to grow PASS into something more than it is today. In that other post of mine I wrote a bit about Paul White’s experience at his first Summit. His missus wrote to me on Facebook saying that she welled up over it. But that emotion was nothing about what I wrote – it was about the reaction that the SQL Community had had to Paul. Be proud of it, my SQL brothers and sisters, and never lose it.


  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and an entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only update/draw those collections. They'd either have to be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or be enabled/disabled in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges.

    What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack.

    If I've coded myself into a corner, feel free to throw tomatoes, as long as they're labeled with a helpful suggestion. Hooray for learning curves.
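
    To make the "layer" option concrete, here is a sketch of the idea (C#; the names are invented for illustration, not from my actual code):

        using System.Collections.Generic;
        using System.Linq;

        // Each entity carries a layer id; systems iterate a view pre-filtered
        // to the layers currently on the displayable stack.
        class Entity { public int Id; public int Layer; }

        class GameManager
        {
            readonly List<Entity> entities = new List<Entity>();   // maintained by UpdateEntity
            readonly Stack<int> activeLayers = new Stack<int>();   // displayable layer stack

            public IEnumerable<Entity> EntitiesForActiveLayers()
            {
                var visible = new HashSet<int>(activeLayers);
                return entities.Where(e => visible.Contains(e.Layer));
            }
        }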


  • Is it a bug or a task when something doesn't work yet in the development process?

    - by Patkos Csaba
    We usually have this dilemma in our team. Sometimes, in order to implement a task or a story, we find out that the system must be in a specific state. For example, a specific system configuration has to be made beforehand. The task/story can be completed, and it works as specified with the proper configuration in place. Note that the configuration is not directly related to the task. Next, we have to create a new ... ??? ... something for the process of generating that configuration file. This is where the problems appear. Some say that it is a bug; others say it is a task or an extra feature. So, where is the limit between bugs and tasks during development? Should we even consider something a bug if all the tasks are working as stated in their definitions? Can a thing be considered a bug because one compares it to the current (unstable) state of the system?

    Short example: A feature requires configuring a communication service for a specific operation. In the process of implementation, the team discovers that the service requires the hostnames of the peers to be resolvable to an IP address. The team adds the hostnames to the DNS server (or hosts files) and continues implementing the required feature. After the initial feature is working, a question arises: should the sysadmin configure the DNS or hosts file, or should our application do it automatically? An automatic solution is possible. So a decision is made to implement it. ... here start the discussions ... is this a bug or an extra feature/task?

    PS: I know that I mixed feature/task/story in the question. It is intentional. I am interested in separating bugs from the rest; it doesn't matter what "the rest" means in a particular case.


  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.

    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch, you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build, without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core.

    Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake; a rough sketch follows below) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
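
    A rough sketch of what I have in mind for graph/CMakeLists.txt (option name, paths and target names are invented for illustration):

        # graph/CMakeLists.txt - build as a dependency by default; the standalone
        # switch points the core include path at graph's own sub-module copy.
        option(GRAPH_STANDALONE "Build graph against its own core sub-module" OFF)

        if(GRAPH_STANDALONE)
            set(CORE_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include)
        endif()
        # A dependent project such as studio would instead set CORE_INCLUDE_DIR
        # to its own copy of core before calling add_subdirectory(libs/graph).

        include_directories(${CORE_INCLUDE_DIR})
        add_library(graph src/graph.cpp)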


  • How To Deal With Terrible Design Decisions

    - by splatto
    I'm a consultant at one company. There is another consultant who is a year older than me and has been here 3 months longer than I have, and a full-time developer. The full-time developer is great. My concern is that I see the consultant making absolutely terrible design decisions. For example, M:M relationships are being stored in the database as a comma-delimited string rather than using a conjunction table to hold the relationships. For example, consider two tables, Car and Property:

        Car records:      Camry, Volvo, Mercedes
        Property records: Spare Tire, Satellite Radio, Ipod Support, Standard

    Rather than making a table CarProperties to represent this, he has made a "Property" attribute on the Car table whose data looks like "1,3,7,13,19,25,".

    I hate how this decision and others are affecting the quality of my code. We have butted heads over this design three times in the past two months, since I've been here. He asked me why my suggestion was better, and I responded that our database would eliminate redundant data by converting to a higher normal form. I explained that this design flaw in particular is discussed and discouraged in entry-level college programs, and he responded with a shot at me, saying that these comma-separated-value database properties are taught when you do your master's (which neither of us has). Needless to say, he became very upset and demanded I apologize for criticizing his work, which I did in the interest of not wanting to be the consultant who creates office drama. Our project manager is focused on delivering a product ASAP and is a very strong personality; suggesting to him at this point that we spend some time to do this right will set him off. There is a strong likelihood that both of our contracts will be extended to work on a second project coming up.

    How will I be able to exert dominant influence over the design of the system and the data model to ensure that such terrible mistakes are not repeated in the next project? A glimpse at the dynamics:

    - I can be a strong personality if I don't measure myself.
    - The other consultant is not a strong personality, is a poor communicator, is quite stubborn, and thinks he is better than everyone else.
    - The project manager is an extremely strong personality who is focused on releasing tomorrow's product yesterday.
    - The full-time developer is very laid back and easygoing, a very effective communicator, but someone who will accept bad design if it means not rocking the boat.

    Code reviews or anything else that takes "time" will be out of the question; there is no way our PM will be sold on such a thing by anybody.
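
    For reference, the conjunction-table design I suggested looks like this (table and column names are illustrative):

        -- Junction table replacing the comma-delimited "Property" column.
        CREATE TABLE CarProperties (
            CarId      INT NOT NULL REFERENCES Car(Id),
            PropertyId INT NOT NULL REFERENCES Property(Id),
            PRIMARY KEY (CarId, PropertyId)
        );

        -- "Which cars have Satellite Radio?" becomes a join instead of string parsing:
        SELECT c.Name
        FROM Car c
        JOIN CarProperties cp ON cp.CarId = c.Id
        JOIN Property p ON p.Id = cp.PropertyId
        WHERE p.Name = 'Satellite Radio';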


  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook appointments. I have yet to fill a variable with the start and end times and then loop through that to create the appointments. I'm thinking I'll make the script below into a function and feed it those variables in a loop. The script below creates a whole new calendar folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too.

    Caution: If you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL Server Agent Job"
        $appt.Save()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
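
    The function version I'm planning looks something like this (the parameter names are mine; same COM calls as above, not yet tested against a real job list):

        function Add-AgentJobAppointment {
            param (
                [datetime] $Start,
                [datetime] $End,
                [string]   $JobName,
                [string]   $ServerName,
                [string]   $Details
            )
            # Same Outlook COM objects as the script above, wrapped for reuse in a loop.
            $outlook  = New-Object -ComObject Outlook.Application
            $calendar = $outlook.Session.Folders.Item(1).Folders.Item("SQL Server Agent Jobs")
            $appt = $calendar.Items.Add(1)   # 1 == olAppointmentItem
            $appt.Start      = $Start
            $appt.End        = $End
            $appt.Subject    = $JobName
            $appt.Location   = $ServerName
            $appt.Body       = $Details
            $appt.Categories = "SQL Server Agent Job"
            $appt.Save()
        }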


  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-code generated for a 3D printer as the printed product, by reading the file as line segments and then drawing cylinders with the diameter of the filament around each segment. I think I have managed to do this part right, because the vertices I am sending to the graphics device appear to have been processed correctly. My problem, I think, lies somewhere in the rendering. What basically happens is that when I start rotating my model about the X or Y axis, it renders perfectly for half of the rotation, but for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is strongest with X rotations, though. Here is a picture of the part of the rotation that looks correct: [screenshot] And here is one that looks horrible: [screenshot]

    I am still quite new to XNA/MonoGame and 3D programming as a whole. I have no idea what could possibly be causing this, and even less of an idea of what this type of behavior is called. I am guessing this has something to do with rendering, so I have added the code for that part:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);

            basicEffect.World = world;
            basicEffect.View = view;
            basicEffect.Projection = projection;
            basicEffect.VertexColorEnabled = true;
            basicEffect.EnableDefaultLighting();

            GraphicsDevice.SetVertexBuffer(vertexBuffer);

            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.CullClockwiseFace;
            rasterizerState.ScissorTestEnable = true;
            GraphicsDevice.RasterizerState = rasterizerState;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount);
            }

            base.Draw(gameTime);
        }

    I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found in some tutorial, which allows me to store a vertex with a position, color and normal:

        public struct VertexPositionColorNormal
        {
            public Vector3 Position;
            public Color Color;
            public Vector3 Normal;

            public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
            );
        }

    If any of you have ever seen this type of thing, please help. Also, if you think the problem might lie somewhere else in my code, just request the part you would like to see in the comments section.
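
    One guess I still need to test, in case it points anyone in the right direction: explicitly setting the depth-stencil state before drawing, since without depth testing, triangles draw in submission order and far geometry can show through near geometry:

        // My guess (untested): make sure depth testing is enabled for this draw.
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;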

    Read the article

  • How do I import my first sprites?

    - by steven_desu
    Continuing from this question (new question - now unrelated). So I have a thorough background in programming already (algorithms, math, logic, graphing problems, etc.), however I've never attempted to code a game before. In fact, I've never had anything more than minimal input from a user during the execution of a program. Generally input was given from a file or passed through the console, all necessary functions were performed, then the program terminated with an output.

    I decided to try and get in on the world of game development. From several posts I've seen around gamedev.stackexchange.com, XNA seems to be a favorite, and it was recommended to me when I asked where to start. I've downloaded and installed Visual Studio 2010 along with the XNA Framework, and now I can't seem to get moving in the right direction.

    I started out looking on Google for "xna game studio tutorial", "xna game development beginners", "my first xna game", etc. I found lots of crap. The official "Introduction to Game Studio 4.0" gave me this (plus my own train of thought happily pasted on top thanks to MSPaint): http://tinypic.com/r/2w1sgvq/7 The "Get Additional Help" link (my best guess, since there was no "Continue" or "Next" link) led me to this page: http://tinypic.com/r/2qa0dgx/7

    I tried every page. The forum was the only thing that seemed helpful; however, searching for "beginner", "newbie", "getting started", "first project", and similar on the forums turned up many threads with specific questions that are a bit above my level ("beginner to collision detection", for instance). Disappointed, I returned to the XNA Game Studio home page. Surely their own website would have some introduction, tutorial, or at least a useful link to a community. EVERYTHING on their website was about coding Windows Phone 7.... Everything. http://tinypic.com/r/10eit8i/7 http://tinypic.com/r/120m9gl/7

    Giving up on any official documentation after a while, I went back to Google. I managed to locate www.xnadevelopment.com. The website is built around XNA Game Studio 3.0, but how different can 3.0 be from 4.0?.... Apparently different enough. http://tinypic.com/r/5d8mk9/7 http://tinypic.com/r/25hflli/7 Figuring that this was the correct folder, I right-clicked.... http://tinypic.com/r/24o94yu/7 Hmm... maybe by "Add Content Reference" they mean "Add a reference to an existing file (content)"? Let's try it (after all, it's my only option): http://tinypic.com/r/2417eqt/7

    At this point I gave up. I'm back. My original goal in my last question was to create a keyboard-navigable 3D world (no physics necessary, no logic or real game necessary). After my recent failures my goal has been revised: I want to display an image on the screen. Hopefully in time I'll be able to move it with the keyboard.
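    From what I can gather, once the image has actually been added to the Content project, the code side of my revised goal should be as small as the following sketch (untested by me so far; "mySprite" is a placeholder asset name):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D sprite;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // The asset name is whatever the image is called in the
            // Content project, without the file extension.
            sprite = Content.Load<Texture2D>("mySprite");
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(sprite, Vector2.Zero, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }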

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    SQL Server Analysis Services is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. During this weekend this blog post caught my attention and I enjoyed reading it. Big Data is the next big thing; the growth is predicted to be 60% per year till 2016. There is no single solution in the market right now for the growing needs of big data, and there is no one solution in the business intelligence eco-system either. However, the need for a solution is ever increasing.

    I am personally a Klout user. You can see my Klout profile over here. I do understand what Klout is trying to achieve: a single place to measure the influence of a person. However, it works a bit mysteriously. There are plenty of social media sites on the internet today, and the biggest problem they all face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and bring visitors back, Klout has come up with a system where visitors can give 5 or 10 K+ to other users in a particular area. Looking at all the activities Klout is doing, it is indeed a big consumer of Big Data, as well as an early adopter of big data and Hadoop-based systems. Klout has close to 1 trillion rows of data to analyze, as well as a warehouse of nearly a thousand terabytes.

    Hive, the query layer used for Big Data, supports ad-hoc queries through HiveQL, but there are always better solutions. The alternative is to use SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct way to connect the two, a few common workarounds are already in place. A new ODBC driver from Klout has broken through the limitation, so the SQL Server relational engine can be used as an intermediate stage before SSAS. The white paper discusses these solutions in depth, covering the following important concepts:

    - The Klout Big Data solution
    - Big Data Analytics based on Analysis Services
    - Hadoop/Hive and Analysis Services integration
    - Limitations of direct connectivity
    - Pass-through queries to linked servers
    - Best practices and lessons learned

    The white paper covers all the important concepts that have enabled Klout to go to the next level with its offerings, as well as helped efficiency by offering a few out-of-the-box solutions. I personally enjoyed reading this white paper and I encourage all of you to do so: SQL Server Analysis Services to Hive

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
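    To give a flavor of the "pass-through queries to linked servers" workaround discussed in the paper, the pattern looks roughly like the following sketch. The linked server name (HIVE), the Hive table and its columns are all hypothetical here, and the linked server is assumed to be configured against the Hive ODBC driver:

    using System;
    using System.Data.SqlClient;

    class HivePassThroughSample
    {
        static void Main()
        {
            // OPENQUERY ships the inner HiveQL statement to the linked
            // server and returns the rows through the SQL Server
            // relational engine, which can then feed SSAS.
            const string sql =
                "SELECT * FROM OPENQUERY(HIVE, 'SELECT user_id, score FROM klout_scores LIMIT 10')";

            using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }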

    Read the article

  • Random thoughts on Monday

    - by user10760339
    I know that it has been a long time since my last post; I just thought I would update you on my latest thoughts on governance. I recently completed an executive round table series on EA and Cloud in Singapore, Indonesia and Malaysia, and the response was phenomenal. The key point of the sessions was that Enterprise Architecture is the key enabler of innovation: all companies want to be market leaders, and EA can lay the foundation for the path that delivers that innovation.

    When it comes to innovation, I see two distinct types:

    (a) Passive innovation is where a company innovates through incremental improvements over time. A great example is when airlines went from paper tickets to electronic tickets; the next logical progression is to do the same with boarding passes. There are a lot of examples to choose from, though the thing to keep in mind is that passive innovation will only keep you in the lead. It won't allow you to create new markets or jump from #3 to #1 in one go. For that we need another type of innovation.

    (b) Disruptive innovation is where you create a market where none existed before. Though it is very difficult to do, and requires significant investment in research, product and software development, and not least of all visionary thinking and timing, if done correctly it can turn the world on its ear. A great example is Apple iTunes. Some might say that this is incremental innovation, but only in one aspect: the downloading of music. Other than that, it is all disruptive innovation. Being able to buy a single song rather than the whole album fundamentally changed the way we get our music.

    Behind both of these types of innovation is Enterprise Architecture. EA creates the infrastructure foundation, then the delivery systems and the end-user experience that bring this innovation to market. At Oracle, we are driving that EA innovation with our private cloud offerings from "bolt-to-glass", as I like to say. For more on what Oracle has to offer in EA and cloud, have a look at Cloud Computing | Oracle and Enterprise Architecture - Oracle. I am working on new material that I will be posting in a couple of weeks, so check back regularly for new updates, or feel free to subscribe for updates.

    Read the article
