Search Results

Search found 22224 results on 889 pages for 'point of sale'.

  • The Windows 8 and Ubuntu 12.04 Dual Boot Nightmare

    - by Steve
    I have done some research as to how to go about this dual boot, and I am close, but I need some guidance with booting into Windows 8 (Ubuntu is installed). I have a Lenovo IdeaPad y510p. Here is what I have done to dual-boot this laptop, which came with Windows 8 pre-installed, with Ubuntu 12.04:

    1. I followed every instruction to the letter in the 97-vote answer here, and everything worked fine up until after the repair-boot section: Installing on a Pre-Installed Windows 8 System (UEFI Supported)
    2. I ran into the following error upon restarting after the repair-boot section: error: invalid arch independent elf magic. This error (a GRUB issue) prevented me from booting into Ubuntu :(
    3. After a little googling, I followed the instructions in the "reactivating GRUB 2" section to resolve the error: http://kb.acronis.com/content/1686
    4. I found a possible solution for fixing the Windows 8 boot issue, and tried it: http://webcache.googleusercontent.com/search?q=cache:i9JMyXzzRpYJ:askubuntu.com/questions/279275/dual-boot-problem-windows-8-ubuntu-12-04+&cd=1&hl=en&ct=clnk&gl=us&client=ubuntu

    I thought the above solution worked, but when I attempt to boot into Windows 8, I get the following missing-file error:

        File: \Boot\BCD
        Status: 0xc000000e
        Info: The Boot Configuration Data for your PC is missing or contains errors.

    Here is some other information that may be useful. I have 3 partitions devoted to Ubuntu: the first, sda8, has the bios_grub flag (1049 kB); the second, sda9, is where everything else is (96.6 GB); the last, sda10, is for swap (8299 MB).

    My question is: how do I fix the boot configuration for Windows 8? Any help would be greatly appreciated :)

    Update 1: When I attempt to boot in UEFI mode, I get the following error: invalid arch independent elf magic (the same error I saw in step 2).

    Update 2: A useful link I found: Dual booting Ubuntu 12.04: UEFI and Legacy. So, this is my 4th time installing Ubuntu on this laptop, and it looks like I need to install it in UEFI mode. Should I scrap it all again and reinstall, or is there ANY way of salvaging my installation? At this point, I can't even boot into Windows (and although I have an installation CD to fix the Windows boot issue, using it would ultimately screw over Ubuntu).

    Update 3: After doing a little more browsing around, I found a clean way around this messy GRUB stuff, using rEFInd. Rod Smith's post here saved me: Installing ubuntu 12.04.02 in uefi mode. Now I am able to dual-boot Windows 8 and Ubuntu and boot into both operating systems :) I have another issue (relating to the boot configuration in the BIOS) that I will post as a separate question :)
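    For anyone in the same spot: a quick way to tell whether the currently running Ubuntu session booted via UEFI or legacy BIOS is to look for /sys/firmware/efi, which the kernel exposes only on UEFI boots. A minimal sketch in Python (an equivalent shell test on the same path works just as well):

        import os

        # On Linux, /sys/firmware/efi exists only when the kernel booted via UEFI.
        if os.path.isdir("/sys/firmware/efi"):
            print("Booted in UEFI mode")
        else:
            print("Booted in legacy BIOS mode")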

  • 302: this blog will be closed

    - by preishuber
    After nearly 7 years I will discontinue blogging on this site. My resources are limited. You can reach my German blog, which is used to support my customers. Looking back on a long and interesting journey:

    ASP.NET by ScottGu: That was the reason I joined this site and supported Microsoft as much as I could. For that I was honored as an ASP.NET MVP; thanks again. I met Scott several times. Great guy!

    Forums: I left the NNTP forums a few years ago and now Microsoft has closed them; it was my idea ;-)

    AJAX: That was the wrong way; jQuery won the game.

    IIS7: That is really a great platform and the IIS team rules. I am sad that it is so quiet around that topic.

    ASP.NET after 2.0: No longer my world. I love ASP.NET and ASP.NET server controls. I hate the discussion about how to follow the holy rules of MVC. Microsoft has dropped the goal of bringing ASP.NET to #1 and accepted that PHP is it.

    Facebook & Twitter: Microblogging is taking over a part of the blogging business. Shorter, faster, cheaper; or, as SteveB put it, do more with less.

    Google: Google is taking over the web. I use Bing whenever I can, but Google has more options. Sorry Microsoft, you will lose that game.

    Apple: That is not the biggest problem for Microsoft. The iDevices take over a small part of the market, but big money, and those customers are not strongly locked in. New wave, new hype; game over, Apple.

    Silverlight: My new home. I can reuse a lot of my skills and love the possibilities. Silverlight will overtake WPF and strike at Flash.

    Windows Phone 7: Also fits my skills. I will just use it for fun. I am not really satisfied with what I heard from MIX. Guys from Redmond, I am sad to say you had the best smartphone OS and lost everything.

    The ADO vNext story: That will be the next mystic point. WCF, REST, JSON, ATOM and now OData; nothing about SQL commands. LINQ and ORM are also not the final solution for multilayered, disconnected, async scenarios. Personally, I prefer the OData idea and dislike the Swiss Army knife (German: eierlegende Wollmilchsau) that is WCF.

    I am still on the INETA speakers board and am glad to come to your user group. In all other cases you can hire me through ppedv AG. Goodbye, and have a good life.

  • Ubuntu 12.04 won't load - hangs at Busybox v1.18.5 / initramfs

    - by Marty
    I want to start by saying that I am very new to Linux (about 1 month using it). I have had no problems up until now. I am running Ubuntu 12.04 on a Toshiba laptop with a 250 GB hard drive and 3 GB of RAM. Everything worked fine yesterday. The only changes I made were that I downloaded Banshee to try as a replacement for Rhythmbox, and did a few recommended updates. This morning I tried to boot, it took a long time, and I finally got this error:

        mount: mounting /dev/disk/by-uuid/02bc41cc-1e21-4700-a179-be2805a658c4 on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have requested /sbin/init.
        No init found. Try passing init= bootarg.
        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands
        (initramfs)

    I'm not sure what to do beyond this point. I have read around on here and haven't found the help I need. I did try to boot from the Live CD. I can get to the Try Ubuntu/Install Ubuntu screen. When I go through the Try Ubuntu selection I can't access my hard disk; when I clicked on it I got this error:

        Unable to mount 247 GB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg|tail or so.

    I tried dmesg|tail and saw a string of values, but nothing that looked helpful. I have also tried to boot from the GRUB screen in recovery mode and as the previous Linux version, but they didn't work either. I tried to load Windows Recovery Environment (loader) (on /dev/sdc3) and got this message:

        error: no such device: 268057B1805785E9
        error: hd1 cannot get C/H/S values

    I had seen somewhere that I could fix this with the Live CD, but my knowledge isn't good enough to try. I tried something with Gpart that I had read about, but the system told me that I didn't have Gpart. Could someone please explain to me what I need to do and/or haven't tried yet? Thanks so much!

  • An update on using Rosetta Stone: Studio now isn't very useful and is not great value as an add-on option

    - by Greg Low
    I had a surprisingly large number of responses to my previous posting about learning Chinese, so here is an update for those considering Rosetta Stone (www.rosettastone.com) for Chinese, Spanish, or any other language that they offer.

    I had to renew my "Studio" subscription today, and it's now a much worse deal than it was. It's now $75 for 6 months of Studio sessions. Online classes used to be 45 minutes; recently they reduced them to 20 minutes. Given how often people have connection issues, etc., that 20 minutes can disappear very quickly. They've also reduced the number of classes you can attend. You used to be able to have 2 scheduled at any point in time; now they limit you to 2 "group sessions" per month during the period (you can pay for additional private sessions). The combination of these two changes makes it much less useful. Two 20-minute sessions per month is an almost meaningless amount of practice.

    They also now automatically switch you to auto-renewal when you subscribe. They tell you where to remove this auto-renewal, but the first 4 or 5 times that I went into that screen, no such option appeared. Later, an option did appear and I used it. Overall, things just aren't what they used to be at Rosetta Stone. It's now pretty hard to recommend the Studio option, where it was a no-brainer before.

    FURTHER UPDATE: <sigh> Even after I renewed, I could not even connect to their "new" service. Although the system processed the renewal, it still tells me it's expired. My online chat person "Siva S" tells me that the problem is that I've purchased all 5 levels of the program. I can't wait till they explain to me how making an extra purchase from them stops me from logging on. Siva told me that they had "renewed" the program, that I'd have to speak to Customer Care, that they weren't available, and then disconnected himself. Impressive (not). Their website is now full of issues too. It insists that my billing address is in the USA, even though it pretends to accept changes to it.

    Overall, it's gone from something that could be recommended (with some limitations) to an app to avoid. That's a pity, as I liked much of it before.

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the various subjects they cover. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been the biggest supporters of my books. This coming weekend they have a technology event at their Bangalore location. Every attendee of the technology event will get a set of two books worth Rs. 450: 'SQL Server Interview Questions And Answers' and 'SQL Wait Stats Joes 2 Pros'. I am going to cover a couple of topics from the books and present as well. I am very confident that every attendee will have a great time.

    I will be covering the following subject: SQL Server Tricks and Tips for Blazing Fast Performance. Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how SQL Server has been set up. The session will focus on ways of identifying problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. After the session is over, I will point to the exact location in the book where you can continue your further learning. I am pretty excited; this is more like a book reading, but in an entirely different format.

    The one-day event will cover four technologies in four separate interactive sessions: Microsoft SQL Server, Security, VMware/Virtualization, and ASP.NET MVC.

    Date of the event: Dec 15, 2012, 9 AM to 6 PM.
    Location of the event: Koenig Solutions Ltd. # 47, 4th Block, 100 feet Road, 3rd Floor, Opp to Shanthi Sagar, Koramangala, Bangalore - 560034. Mobile: 09008096122. Office: 080-41127140.

    The organizers have informed me that there are very limited seats for this event, and the technical session based on my book will start at 9 AM sharp. If you show up late, there is a chance that you will not get a seat. Registration for the event is a MUST. Please visit this link for further information.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

  • Is there an appropriate coding style for implementing an algorithm during an interview?

    - by GlenPeterson
    I failed an interview question in C years ago about converting hex to decimal, by not exploiting the ASCII table: if (inputDigitByte > '9') hex = inputDigitByte - 'a' + 10. The rise of Unicode has made this question pretty silly, but the point was that the interviewer valued raw execution speed above readability and error handling.

    They tell you to review algorithms textbooks to prepare for these interviews, yet these same textbooks tend to favor the implementation with the fewest lines of code, even if it has to rely on magic numbers (like "infinity") and a slower, more memory-intensive implementation (like a linked list instead of an array) to do that. I don't know what is right. Coding an algorithm within the space of an interview has at least 3 constraints: time to code, elegance/readability, and efficiency of execution. What trade-offs are appropriate for interview code?

    - How much do you follow the textbook definition of an algorithm? Is it better to eliminate recursion, unroll loops, and use arrays for efficiency? Or is it better to use recursion and special values like "infinity" or Integer.MAX_VALUE to reduce the number of lines of code needed to write the algorithm?
    - Interface: do you make a very self-contained, bullet-proof interface, or a sloppy and fast one? At one extreme, the array to be sorted might be a public static variable. At the other extreme, it might need to be passed to each method, allowing methods to be called individually from different threads for different purposes.
    - Is it appropriate to use a linked-list data structure for items that are traversed in one direction, vs. using arrays and doubling the size when the array is full? Implementing a singly-linked list during the interview is often much faster to code and easier to remember for recursive algorithms like MergeSort.
    - Thread safety: just document that it's unsafe, or say so verbally? How much should the interviewee be looking for opportunities for parallel processing?
    - Is bit shifting appropriate? x / 2 or x >> 1?
    - Polymorphism, type safety, and generics?
    - Comments?
    - Variable and method names: qs(a, p, q, r) vs. quickSort(theArray, minIdx, partIdx, maxIdx)?
    - How much should you use existing APIs? Obviously you can't use a java.util.HashMap to implement a hash table, but what about using a java.util.List to accumulate your sorted results?

    Are there any guiding principles that would answer these and other questions, or is the guiding principle to ask the interviewer? Or maybe this should be the basis of a discussion while writing the code? If an interviewer can't or won't answer one of these questions, are there any tips for coaxing the information out of them?
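    For illustration, here is a minimal sketch of the character-code trick the interviewer was after; it is shown in Python rather than C, and the function names are made up for this example:

        def hex_digit_value(ch: str) -> int:
            """Map one hex digit to its value using character codes."""
            if "0" <= ch <= "9":
                return ord(ch) - ord("0")
            if "a" <= ch <= "f":
                return ord(ch) - ord("a") + 10
            if "A" <= ch <= "F":
                return ord(ch) - ord("A") + 10
            raise ValueError("not a hex digit: " + repr(ch))

        def hex_to_decimal(s: str) -> int:
            """Fold the digits left to right: value = value * 16 + digit."""
            value = 0
            for ch in s:
                value = value * 16 + hex_digit_value(ch)
            return value

        assert hex_to_decimal("ff") == 255
        assert hex_to_decimal("10") == 16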

  • What source code organization approach helps improve modularity and API/Implementation separation?

    - by Berin Loritsch
    Few languages are as restrictive as Java with file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there are still a lot of empty directories and artificial constraints.

    There are several languages that define everything about a class in one file, at least by convention: C#, Python (I think), Ruby, Erlang, etc. The commonality in most of these languages is that they are object oriented, although that statement can probably be rebuffed (there is one non-OO language in the list already).

    Finally, there are quite a few languages, mostly in the C family, that have a separate header and implementation file. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C it seems that feature is used to promote modularity. Yet with C++ the way header and implementation files are split seems rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header that you would rather keep only in the implementation.

    There are quite a few languages that have an interface concept, like Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept, like C++ using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing, for example Ruby. Ruby has modules, but those are more along the lines of mixing behaviors into a class than defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation.

    So, to hurry up and ask the question, from a personal experience point of view:

    - Does separation of header and implementation help you write more modular code, or does it get in the way? (It helps to specify the language you are referring to.)
    - Does the strict file-name-to-class-name scheme of Java help maintainability, or is it unnecessary structure for structure's sake?
    - What would you propose to promote good API/implementation separation and project maintenance, and how would you prefer to do it?
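    As a concrete illustration of the interface/implementation separation discussed above, here is a minimal sketch in Python (one of the languages mentioned), using an abstract base class as the "header" and a plain class as the implementation; the class names are invented for the example:

        from abc import ABC, abstractmethod

        class KeyValueStore(ABC):
            """The API contract: callers depend only on this interface."""

            @abstractmethod
            def get(self, key: str) -> str: ...

            @abstractmethod
            def put(self, key: str, value: str) -> None: ...

        class InMemoryStore(KeyValueStore):
            """One implementation; swappable without touching callers."""

            def __init__(self) -> None:
                self._data = {}

            def get(self, key: str) -> str:
                return self._data[key]

            def put(self, key: str, value: str) -> None:
                self._data[key] = value

        store: KeyValueStore = InMemoryStore()
        store.put("answer", "42")
        assert store.get("answer") == "42"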

  • Why is wearing jeans considered unprofessional?

    - by Gopinath
    When I started my career 9 years ago I used to wear casual clothes to the office: jeans and T-shirts, all 5 days. The environment at the workplace in those days encouraged me to be casual, and many of my colleagues used to come in jeans. We had just started our careers, and in those days it was perfectly fine to be in casuals. As I moved up the ladder, I started feeling the discomfort of wearing jeans at work. During client visits, senior managers' meetings, and consultations I was the odd man in the crowd, as the rest of them were in formals. In order to be one among the professionals, I was forced to change my dressing style and start wearing formals. But the question "Why is wearing jeans to the workplace considered unprofessional?" lingered in my mind till today.

    I got the answer to my question from a discussion thread on Quora:

    "When they were invented, jeans were associated with blue-collar work. They were meant to get muddy and gross and take lots of abuse without falling apart, even if you wore the same pair every day. The people who bought them were the ones whose lives required durable clothing."

    And another commenter says:

    "A professional image is critical to cementing business relationships, and part of that is, for right or wrong, how you dress. Jeans are typically associated with 'kicking back', relaxation, leisure, informality, and even a slightly rebellious flavor. The style and condition of the jeans are a consideration, as we often wear jeans into advanced states of being worn down, with tearing, etc., which we generally do not do with other clothing items."

    I agree with this theory, even though it may be centuries old. If you want to look like a professional and be treated like a professional, it's better to dress up in formals. These days I make a point to be in formals at the workplace. Not everyone is Steve Jobs, who could get away with jeans and a turtleneck, right?

    CC image credit: flickr/exey

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop, I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double-click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily, and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally, you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    -- @nLiveHours tinyint
            1,    -- @nLiveDays tinyint = 0
            30,   -- @nHardDeleteDays tinyint = 0
            null, -- @nvcFolder nvarchar(1024) = null
            null, -- @nvcValidatingServer sysname = null
            0     -- @fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than HardDeleteDays will be deleted; this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This value should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if it is null the job will not run, failing with the following error (visible in SQL Server Management Studio > Job Activity Monitor > View History, on the Archive and Purge step):

        DTA Purge and Archive (BizTalkDTADb) Job failed. The @nvcFolder parameter cannot be null.

    How long you choose to keep instances in the tracking database is really up to you. For development, I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to, say, the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every user agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find even a single discussion about it on the internet, nowhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto", it doesn't mean it will have this web link, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this:

        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60

    If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html , you will notice that the angle brackets are in their unescaped format. This isn't just true for botsvsbrowsers, but for several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot work out which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8- or 9-point-something browser version. I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text, such as +http://www.someSite.com appended to a user agent, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about those.
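    If the goal is just to keep such entries out of an analysis, a simple filter that flags user agents containing an embedded anchor tag is enough. A minimal sketch in Python; the regex is only a heuristic:

        import re

        # An embedded "<a href=...>" is a strong signal of a spam/SEO user agent;
        # legitimate bot UAs carry plain-text URLs like "+http://example.com".
        ANCHOR_RE = re.compile(r"<a\s+href=", re.IGNORECASE)

        def is_spam_user_agent(ua: str) -> bool:
            return bool(ANCHOR_RE.search(ua))

        assert is_spam_user_agent(
            'Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">'
            'disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229'
        )
        assert not is_spam_user_agent("SomeBot/1.0 (+http://www.someSite.com)")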

  • Fonts look squashed or stretched in the browser on Ubuntu

    - by Arjun Menon
    Fonts in the browser on Ubuntu look squashed or stretched compared to Windows/OS X. This image shows exactly what I mean: http://i.stack.imgur.com/suUXX.png

    I installed msttcorefonts and configured both Chrome and Firefox to use Microsoft fonts (Arial, Times New Roman) instead of the default ones. While the MS fonts made web pages appear a bit different, the squashed/stretched look remained regardless of the font. FreeSans looks a little different from Arial, but it too is rendered squashed/stretched, like Arial, on both Firefox and Chrome. Opera renders the Wikipedia page differently from Firefox and Chrome, but the fonts look squashed/stretched on it as well.

    I used to run Kubuntu prior to switching to Ubuntu, and at some point I managed to get the fonts in Chrome (only Chrome) to look exactly like in the image on the left. I have no idea how I did it, though. Firefox and Rekonq retained the squashed/stretched look. I had been using Rekonq for a while, then switched to Firefox. While using both browsers I had done various things to get the fonts to look better, with no success, like installing the MS fonts and configuring both browsers to use them. Then, after some time, I installed Chrome, and the fonts magically looked perfect in it, just like on the right-hand side of the image. In fact, the font smoothing looked better (to my eye) than on Windows and OS X. All 3 OSes use subtly different font smoothing strategies, and the differences stand out.

    Later, I formatted and installed Ubuntu 12.04. The first thing I did was install msttcorefonts and then install Chrome. To my dismay, the fonts in Chrome looked just as squashed/stretched as they did in Firefox. There is no browser (except Wine Internet Explorer) that renders fonts properly on my Ubuntu setup right now. Fixing this is definitely possible, since I was able to do it on Kubuntu, but apparently it requires some mysterious tweaking. Would anyone be willing to help me out?

  • TDD vs. Productivity

    - by Nairou
    In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use.

    However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace.

    Each time I think about the next step that needs work, I have to stop every time and think about how I would write a test for it, to allow me to write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests, and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list and see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented.

    It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of "getting the damn project finished". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, then I'll slip out of the habit of making the code so modular and testable. Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation?
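    One common answer to the "another 50 similar tests" problem (offered here as an aside, not something from the original post) is table-driven, or parameterized, tests: one test body runs over a list of cases, so a new entity or rule costs a line of data instead of a whole new test. A minimal sketch in Python's pytest style, with a toy Entity class standing in for the real game code:

        import pytest

        class Entity:
            """Toy stand-in for a game entity, for illustration only."""
            def __init__(self, name: str, hp: int) -> None:
                self.name = name
                self.hp = hp

            def is_alive(self) -> bool:
                return self.hp > 0

        # Each tuple is one case; adding a case is one line, not one more test.
        CASES = [
            ("goblin", 5, True),
            ("goblin", 0, False),
            ("troll", 20, True),
        ]

        @pytest.mark.parametrize("name,hp,alive", CASES)
        def test_entity_alive(name: str, hp: int, alive: bool) -> None:
            assert Entity(name, hp).is_alive() == alive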

  • What if you could work on anything you wanted?

    - by red@work
    This week we've downed our tools and organised ourselves into small project teams or struck out alone. We're working on whatever we like, with whoever we like, wherever we like. We've called it Down Tools Week, and so far it's a blast.

    It all started a few months ago with an idea from Neil, our CEO. Neil wanted to capture the excitement, innovation, and productivity of Coding by the Sea and extend this to all Red Gaters working in Product Development. A brainstorm is always a good place to start for an "anything goes" project. Half of Red Gate piled into our largest meeting room (it's pretty big) armed with flip charts, post-its and a heightened sense of possibility. An hour or so later our SQL Servery walls were covered in project ideas.

    So what would you do, if you could work on anything you wanted? Many projects are related to tools we already make, others are for internal product development use, and some are, well, just something completely different. Someone suggested we point a web cam at the SQL Servery lunch queue so we can check it before heading to lunch. That one couldn't wait for Down Tools Week. It was up and running within a few days and, even better, it captures the table tennis table too.

    Thursday is the Show and Tell. I am looking forward to seeing what everyone has come up with. Some of the projects will turn into new products or features, so this probably isn't the time or place to go into detail about what is being worked on. Rest assured, you'll hear all about it! We're making a video as we go along too, which will be up on our website soon. In the meantime, all meetings are cancelled, we've got plenty of food in, and people are being very creative with the £500 expenses budget (Richard, do you really need an iPad?). It's brilliant to see it all coming together from the idea stage to reality. Catch up with our progress by following #downtoolsweek on Twitter. Who knows, maybe a future Red Gate flagship tool is coming to life right now?

    By the way, it's business as usual for our customer facing and internal operations teams. Hmm, maybe we can all down tools for a week and ask Product Development to hold the fort?

    Post by: Alice Chapman

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working.

    I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database plus a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale.* If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on.

    I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds that complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers were never made to understand the medical business? What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    EDIT: *This bit seemed to get a lot of attention. What I mean is I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?

  • Connect Team Foundation Service/TFS 2012 with Visual Studio 2010 & Visual Studio 2008

    - by Vishal
    Hello,

    Microsoft finally released the Team Foundation Service in late October 2012, after its long time in the preview phase. I was already using the TFS Preview, which was free, and I was happy to see Microsoft release the Team Foundation Service also FREE for up to 5 users. Isn't that great news? I know there are a bunch of other free source control repositories out there (GitHub, Bitbucket, SVN, etc.), but I somehow like TFS better. The other good thing about the final release was that I didn't have to do any kind of migration of my code from preview to the final release version; I just changed the TFS connection URL and it worked like a charm.

    Anyway, if you are a startup with a small team and need some awesome source control along with all the good project management, continuous integration (build, test, deploy), team collaboration, and Agile/Scrum planning features, then Team Foundation Service is your answer. Microsoft has not yet released its pricing for more than 5 users and will be releasing it sometime in early 2013. What if, as of now, you have a team of more than 5 users and you want to use Team Foundation Service? The good news is that you can use it for FREE, but when they release the final pricing, you will have to transition to a paid plan.

    Lots of story; getting to the point: connecting to Team Foundation Service with Visual Studio 2012 is straightforward and will work out of the box, but it won't for previous versions of Visual Studio. You will have to upgrade to the latest service pack first and then install the forward compatibility pack (1st: service packs; 2nd: forward compatibility packs).

    For Visual Studio 2010:
    1. Visual Studio 2010 Service Pack 1.
    2. Visual Studio 2010 forward compatibility for TFS 2012 and Team Foundation Service.

    For Visual Studio 2008:
    1. Visual Studio 2008 Service Pack 1.
    2. Visual Studio 2008 forward compatibility for TFS 2012 & Team Foundation Service.
    3. Restart your system.

    Visual Studio 2008 will not work if you only put https://xxx.visualstudio.com. You will have to put your collection name too, as shown below.

    By the way, it doesn't matter if you are an Apple application developer or an Android app developer, you can still use Team Foundation Service as your source control. Below are a few links on connecting to Team Foundation Service from other IDEs:
    - Connect Eclipse to Team Foundation Service.
    - Connect XCode to Team Foundation Service.

    Happy coding.
    Vishal Mody

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
    I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start by cloning some well known games like Galaga, Tetris, Arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my Arkanoid clone. But it is far from over: I can still work on it for months (I program as a hobby in my free time) implementing a screen resolution switcher, remapping of the control keys, power-ups falling from broken bricks, and a huge etc.

    But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone, in order to apply some design ideas I have come upon while developing this Arkanoid clone (at the same time I am reading the GoF book and a lot of source code from the Ludum Dare 21 game contest).

    So the question is: should I keep improving the Arkanoid clone until it has all the features the original game had, or should I move on to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrain the answers to the most effective way to learn how to make my own games (not cloning someone else's ideas). Thank you!

    CLARIFICATION: In order to clarify what I have implemented, I make this list.

    Features implemented:
    - Bouncing capabilities (the ball bounces on walls, on bricks, and on the bar).
    - Sounds when bouncing on bricks and the bar, and when the player wins or loses.
    - Basic title menu (new game and exit only). Also in-game menu and win/lose menus.
    - Only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?).

    Features not implemented:
    - Power-ups when breaking the bricks.
    - Complex bricks (with more than one "hit point" and invincible).
    - Better graphics (I am not really good at it).
    - Programming polishing (use the design patterns more intensively).

    Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-) My question was related to the not-implemented features. I wondered what was the fastest (optimal) path to learn: 1) implement the not-implemented features in this project, which is getting big, or 2) make a new game, which will probably teach me those lessons and new ones.

    ANSWER: I chose @ashes999's answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here having the same question, read all the discussion before making a rushed decision. Thank you all!

  • Any suggested approaches to track bugs/defects?

    - by deostroll
    What is the best way to track defect sources in TFS? We have various teams for a project, like the vulnerability team, the customer, pre-sales, etc. We give them a build, and these teams test it independently. They do not have access to our TFS system, so they usually send in their defects via email, usually in an Excel sheet. Our testing team takes these up and logs them into TFS. Sometimes they modify the original defect description (in the Excel sheet) and add the expected/actual results. Sometimes they forget to cite the source.

    I am talking about managing the various sources as such. Is there a way we can add these sources into TFS, and actually link a particular source with its defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)?

    Edit: I don't know if there is a way to manage various sources. Consider this: the vulnerability assessment team has come up with defects/suggestions. They captured them in an Excel sheet and passed it on to the testing team (in my case). The testing team takes the responsibility of elaborating on each defect and logging it in TFS. Now say that the Excel sheet came with 20 defect items. This is my source. (It answers the question of where this defect came from.) So ultimately, when I am looking at a bug, I know where it came from: I'll ultimately be looking at the email sent from the VA team, which has the Excel file, or the Excel file itself sent by the VA team. The bug may be one of the 20 items in that sheet. How should the tester link to this source just once? On the contrary, it does not make sense for the tester to attach the same Excel file 20 times (i.e. attach the same file for each of the 20 defects while logging them into TFS), right? I hope you get my point.

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, first of all you need to do the following steps:

    1. Create a database on SQL Server that will be used by WordPress.
    2. Create a login that can access the just-created database, and put the user into the ddladmin, db_datareader, and db_datawriter roles.
    3. Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.

    Now that the basic actions have been done, you can start to set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    1. Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    2. Put the db.php file from inside the wp-db-abstraction.php directory into wp-content/db.php.

    Now you can create an application pool in IIS like the following one, and create a website, using the above application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're requested to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server.

    After having finished the installation steps, you should be able to access and navigate your WordPress site. A final touch, and it's done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that's it!

    Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    - Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    - "Select Top 0" fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I've created a very simple WordPress plugin to set up full-text search and use it as the website search engine: http://wpfts.codeplex.com/

    Enjoy!

  • Code review recommendations and Code Smells

    - by Michael Freidgeim
    Some time ago Twitter told me that I am similar to Boris Lipschitz. Indeed, he is also a .NET programmer from Russia living in Australia. I've read his list of code review points and found it quite comprehensive. A few points were not clear to me, and that forced me into further reading.

    In particular, the statement "Exception should not be used to return a status or an error code" wasn't fully clear to me, because sometimes we store an exception as an object with all the error details, and I believe that's a valid approach. However, I agree that throwing exceptions should be avoided if you expect to return an error as part of a normal flow. Related link: http://codeutopia.net/blog/2010/03/11/should-a-failed-function-return-a-value-or-throw-an-exception/

    Another point slightly puzzled me: "If Thread.Sleep() is used, can it be replaced with something else, e.g. Timer, AutoResetEvent, etc." I believe there are very rare cases where anyone uses Thread.Sleep in production code. Usually it is used in mocks and prototypes.

    I had to look further to clarify "Dependency injection is used instead of Service Location pattern". Even though most articles show some preference for dependency injection, there are also advantages to using service location; e.g. see http://geekswithblogs.net/KyleBurns/archive/2012/04/27/dependency-injection-vs.-service-locator.aspx. http://www.cookcomputing.com/blog/archives/000587.html refers to the concluding thoughts of Martin Fowler: the choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.

    The post had a link to Jeff Atwood's excellent article on Code Smells, but the statement that "code should not pass a review if it violates any of the code smells" sounds too strict for my environment. In particular, I disagree with the "Dead Code" recommendation: "Ruthlessly delete code that isn't being used. That's why we have source control systems!" If there is a chance that unused code will be required in the future, it is convenient to keep it in commented-out or #if/#endif blocks with an appropriate explanation of why it could be required. TFS is a good source control system, but a context search in the source code of the current solution is much easier than finding something in previous versions of the code.

    I also found a link to a good book, "Clean Code: A Handbook of Agile Software Craftsmanship".

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing, as we are using all OOTB features. So we customised a custom list form using InfoPath and added a few rich text boxes (spell check is a requirement for this specific project). All was good in the InfoPath client, including the spell checker, so, happy days, I published straight away.

    Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog format. I went to my rich text boxes to check the spell checker and, voila, it's disabled!

    I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked back to my old blog post. I also tried placing the spell check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launches the Spell Check dialog, but it doesn't spell check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with the experimentation, and I logged a ticket with Microsoft Premier Support.

    On a call with the support engineer, I browsed through the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft!

    Anyway, to cut my story short, here is a summary of my solution:

    1. Navigate to your custom list.
    2. In the ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This will display newifs.aspx, which is the page displayed when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts. In this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP would do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a rich text box. Tadah! The Spell Check tab is now enabled!

    Do the same steps for the (Item) Edit Form to enable spell checks when editing an item. This "no code" solution was discovered purely by accident!

  • How can I solve this SAT edge case?

    - by ssb
    I have an SAT implementation that basically works, and the fact that it works is what's giving me a few headaches. Basically, there are some situations where using the SAT doesn't quite give me my intended result. One of these involves movement across multiple collision objects. Or, to put it another way, if I have several collision boxes lined up next to each other so as to create something like a wall or a floor, movement along that surface while constantly applying force into that surface sometimes causes hangups, i.e. the player stops moving. In the illustration I have in mind, the 2 boxes on the bottom represent a floor, and the box on top, in the middle, represents what my player is doing. There are several squares lined up as world obstacles to create some kind of wall, and the issue arises if I move to the left across this surface while holding the down key. It only happens at the exact dividing point between two blocks, and it only happens when moving to the left.

    At any rate, I think I know why it happens, but I don't know how to solve it. When I update my player movement I consider which directions are pressed, naturally, so if down is pressed I will add the speed to the Y component, and so on. But due to the way my SAT is implemented, when the penetration into the shape is the same from both sides, it just goes with the smallest axis that it finds first, and it checks collisions against objects in the order that they were created, because it goes through a foreach loop on the list of collidable objects. So this all adds up to the effect that, if I'm moving to the left over a series of boxes while holding down, it will resolve me back to the right out of the first box and then up out of the box to the right of it, and this continues as long as the penetration is the same. The odd part is that this doesn't happen every time, which I am going to attribute to some oddity in multiplying velocity by the game time, causing minor discrepancies between the penetration lengths. Ultimately, it will keep resolving me to the right and up, but this is technically expected behavior.

    All the solutions I can think of only address the symptoms of this problem and not the actual cause, such as not using many blocks to create walls or shapes, which is an option I'd like to keep open. I could also change which axis my algorithm defaults to, but that would just cause problems when going up/down along the walls. What can I do to fix this?
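    One standard remedy for this internal-edge problem (offered as a suggestion, not something from the original question) is to resolve overlaps one at a time, largest overlap first, and re-test after every push: once the tile directly underneath has pushed the player up, the phantom sideways overlap with the neighbouring tile disappears and never gets resolved. A minimal axis-aligned bounding box sketch in Python:

        from dataclasses import dataclass

        @dataclass
        class AABB:
            x: float
            y: float
            w: float
            h: float

            def center(self):
                return (self.x + self.w / 2, self.y + self.h / 2)

            def overlap(self, o):
                """Penetration depth (dx, dy) per axis, or None if separated."""
                dx = min(self.x + self.w, o.x + o.w) - max(self.x, o.x)
                dy = min(self.y + self.h, o.y + o.h) - max(self.y, o.y)
                return (dx, dy) if dx > 0 and dy > 0 else None

        def resolve(player, tiles):
            """Push player out of tiles, biggest overlap first, re-testing
            after each push so stale overlaps are never acted on."""
            for _ in range(len(tiles) + 1):  # bounded; normally exits early
                hits = [(t, ov) for t in tiles
                        if (ov := player.overlap(t)) is not None]
                if not hits:
                    return
                tile, (dx, dy) = max(hits, key=lambda h: h[1][0] * h[1][1])
                (px, py), (tx, ty) = player.center(), tile.center()
                if dx < dy:  # push out along the axis of least penetration
                    player.x += dx if px > tx else -dx
                else:
                    player.y += dy if py > ty else -dy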

  • Ubuntu Dual Screen Using Virtual Machine - AMD GPU

    - by Chris
    I've been searching online and reading tutorials etc. about how to make my Ubuntu VM dual screen (x86_64). I first tried to run this command:

        sudo aticonfig --initial -f

    which gave me the output:

        sudo: aitconfig: command not found

    I then googled the output and followed these instructions, which tell me to install the ATI drivers in my Ubuntu:

        wget http://www2.ati.com/drivers/linux/ati-driver-installer-11-5-x86.x86_64.run
        sudo sh ati-driver-installer-11-5-x86.x86_64.run --buildpkg Ubuntu/natty
        sudo dpkg -i *.deb
        sudo apt-get -f install
        sudo aticonfig -f --initial --adapter=all
        sudo reboot

    It all works well until I input sudo apt-get -f install, which gives me the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
        3 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up fglrx (2:8.850-0ubuntu1) ...
        update-alternatives: error: alternative link /usr/bin/aticonfig is already managed by x86_64-linux-gnu_gl_conf.
        dpkg: error processing fglrx (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of fglrx-amdcccle:
         fglrx-amdcccle depends on fglrx; however: Package fglrx is not configured yet.
        dpkg: error processing fglrx-amdcccle (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of fglrx-dev:
         fglrx-dev depends on fglrx; however: Package fglrx is not configured yet.
        dpkg: error processing fglrx-dev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing: fglrx fglrx-amdcccle fglrx-dev
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    At this point, I don't know what to do, since running: gksudo amdcccle

    For the record, I have 3D acceleration turned on. The following is my VM's GPU:

        lspci | grep VGA
        00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter

    Any help on how I can make my VM dual screen with Ubuntu would be great. Thank you in advance.

  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this.

    I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service-oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months.

    Here's where the problem comes in. A couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking about that kind of support; I'm talking more of a second-level support guy (for when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of workplace this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code; they are more or less just knowing the system architecture, making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work on maintenance, these are basically just bug fixes/improving bad code, so this sucks as well, although at least it's related to coding.

    Am I wrong for getting angry here? I don't really want to complain about it, but to be honest, I wasn't spoken to about this or anything. I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy who knows it's not that great of an opportunity.

    I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being (no point making a bad impression on anybody), but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it / how you guys would feel. Thanks guys.

  • Cannot get Virtualbox to install properly on Ubuntu 12.04

    - by lopac1029
    I cannot get VirtualBox to install properly on my 12.04. I first went with a manual install of the .deb from the old-builds section of the VirtualBox page. That .deb opened up the Software Center and installed. Then I got this error:

        VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

    which I can only assume was due to my Ubuntu version being 32-bit (System Details > Overview > OS type: 32-bit, right?). So I followed these instructions to remove the .deb manually, restarted my laptop, and then FOUND the actual VirtualBox install in the Software Center and installed from that (assuming it would give me the correct version I need for my system).

    So after all that (and then some), I'm still getting the same error when I connect to my new job's project in VirtualBox. Can anyone point me in the right direction of what to do here? This is the first time I've ever worked with VirtualBox, and no one at this company is using Ubuntu, so I'm on my own here.

    EDIT: Here is the direct info from running the 2 suggested commands.

    lscpu:

        Architecture:          i686
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                2
        On-line CPU(s) list:   0,1
        Thread(s) per core:    1
        Core(s) per socket:    2
        Socket(s):             1
        Vendor ID:             GenuineIntel
        CPU family:            6
        Model:                 23
        Stepping:              10
        CPU MHz:               2100.000
        BogoMIPS:              4189.45
        L1d cache:             32K
        L1i cache:             32K
        L2 cache:              2048K

    cat /proc/cpuinfo (both cores report the same values; core 0 shown):

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
        stepping        : 10
        microcode       : 0xa07
        cpu MHz         : 1200.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 2
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
        bogomips        : 4189.80
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:
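    One thing the pasted output already reveals: the flags line contains neither vmx (Intel VT-x) nor svm (AMD-V), so hardware virtualization is unavailable to the OS, either unsupported by this CPU or disabled in the BIOS, and that is exactly what the VirtualBox error is complaining about. A minimal Python sketch of the same check:

        # "vmx" means Intel VT-x, "svm" means AMD-V; neither appears in the
        # flags pasted above, hence no 64-bit guests on this machine.
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    print("VT-x/AMD-V present:", bool({"vmx", "svm"} & flags))
                    break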

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regard to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing: price, availability, even scalability. But what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.

    The overall point of the article was pretty simple. It all boiled down to the summation that hosting "things" in the cloud (email, documents, etc.) is interpreted differently under the law, with regard to constitutional search and seizure, than a document or item that is kept in physical form at a business or home. If you physically have it stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology.

    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. As one example, the upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting would be possible for service providers. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). For businesses looking forward to this new functionality, it is extremely important to understand that they might be opening themselves up to searches that do not need a warrant when their information is stored in the cloud.

    It will be interesting to see how this all plays out in the next few months. Usually the law changes slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently from content in their possession. However, until there is some type of parity, or more concrete laws regarding the differences, be very careful about what you put in the cloud.

    Michael
