Search Results

Search found 4745 results on 190 pages for 'spare parts'.

Page 91/190

  • Oracle IRM video demonstration of separating duties of document security

    - by Simon Thorpe
    One thing an Information Rights Management technology should do well is separate out three main areas of responsibility:

    - The business process of defining and controlling the classifications to which content is secured, and the definition of the roles employees, customers, partners and contractors have when accessing secured content.
    - Allowing IT to manage the server and perform the role of authorizing the creation of new classifications to meet business needs, yet once a classification has been created and handed off to the business, IT no longer plays a role in its ongoing management.
    - Empowering the business to take ownership of the classifications to which their own content is secured. For example, an employee who is leading an acquisition project should be responsible for defining who has access to confidential project documents. This person should be able to manage the rights users have in the classification and also be the point of contact for those wishing to gain rights.

    Oracle IRM has had this core model at the heart of its design since its creation in the late 1990s. Due in part to this important separation of rights from the documents themselves, Oracle IRM places the right functionality within the right parts of the business. For example, some IRM technologies allow the end user to make decisions about whether users can print, edit or save a secured document. In practice this results in a wide variety of content secured with a plethora of options that don't conform to any policy. With Oracle IRM, users choose from a list of classifications against which they have been given the ability to secure information. Their role in the classification was given to them by the business owner of the classification, yet the definition of the role resides within the realm of corporate security, who own the overall business classification policies. It is this type of design and philosophy in Oracle IRM that makes it an enterprise solution that works beyond a few users and a few secured documents, scaling to hundreds of thousands of users and millions of documents.

    The following video shows how Oracle IRM 11g, the market-leading document security solution, lets the security organization manage and create classifications whilst the business owns and manages them. If you want to experience using Oracle IRM secured content and the effects of the different roles users have, why not sign up for our free demonstration.

    Read the article

  • Turn a Green Laser into a Microscope Projector [Science]

    - by ETC
    No need to settle for squinting over a microscope: hack a green laser and a cheap webcam lens into a projector. Check out the video below to see the projector in action. Over at Hackteria, user Dusjagr shares a clever tutorial on how to hack a laser and some odd parts into a projector intended for microscopic subjects. With a strong enough light source, a properly spaced lens assembly, and a slide or petri dish filled with something of interest, you can create a microscope projector. Curious how it would work and what the results would look like? Check out this demonstration video: That wiggling creature you see in the video is a nematode from a sample of pond water. For more information on building your own laser-scope projector, hit up the link below. DIY Laser Microscope [Hackteria via Make]

    Read the article

  • How to deal with static utility classes when designing for testability

    - by Benedikt
    We are trying to design our system to be testable and to develop most parts of it using TDD. Currently we are trying to solve the following problem: in various places we have to use static helper classes like ImageIO and URLEncoder (both standard Java API) and various other libraries that consist mostly of static methods (like the Apache Commons libraries). But it is extremely difficult to test the methods that use such static helper classes. I have several ideas for solving this problem:

    - Use a mock framework that can mock static classes (like PowerMock). This may be the simplest solution but somehow feels like giving up.
    - Create instantiable wrapper classes around all those static utilities so they can be injected into the classes that use them (see the sketch below). This sounds like a relatively clean solution, but I fear we'll end up creating an awful lot of those wrapper classes.
    - Extract every call to these static helper classes into a method that can be overridden, and test a subclass of the class I actually want to test.

    But I keep thinking that this just has to be a problem that many people face when doing TDD - so there must already be solutions for it. What is the best strategy to keep classes that use these static helpers testable?
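
    A minimal sketch of that wrapper idea, assuming a hypothetical ThumbnailService as the class under test; the ImageFileReader/ImageIOFileReader names and the service itself are made up for illustration, not taken from the question:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        // Thin, instantiable wrapper around the static ImageIO helper.
        interface ImageFileReader {
            BufferedImage read(File file) throws IOException;
        }

        class ImageIOFileReader implements ImageFileReader {
            @Override
            public BufferedImage read(File file) throws IOException {
                return ImageIO.read(file); // the only line that touches the static API
            }
        }

        // The class under test depends on the interface, so a test can inject a stub
        // that returns a canned BufferedImage instead of hitting the real codec.
        class ThumbnailService {
            private final ImageFileReader imageReader;

            ThumbnailService(ImageFileReader imageReader) {
                this.imageReader = imageReader;
            }

            int widthOf(File file) throws IOException {
                return imageReader.read(file).getWidth();
            }
        }

    Each wrapper only needs the handful of methods actually called, which in practice keeps the feared proliferation of wrapper classes fairly small.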

    Read the article

  • Handling various frame layouts in Android

    - by vaibhav
    I'm new to game development and am trying to create a Contra- or old-TMNT-style game (but a simple one) for Android. For the game I decided to divide my main screen into three parts: upper for stats, middle for the game and lower for controls. My main.xml is:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <FrameLayout
                android:id="@+id/upper_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/fl"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.5" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/low_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.85" >
            </FrameLayout>

        </LinearLayout>

    So I have created the GameView and GameLoopThread classes for the middle surface (which is pretty standard). My problem is: how do I draw in the upper and lower frame layouts? Should I make new view and thread classes for each layout, should I do all this in the GameView class itself, or is there a better way to implement this?
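
    One way to sketch it (assumptions: the SurfaceView/GameLoopThread pair lives in @+id/fl, and the stats and controls do not need their own threads) is to give the other two FrameLayouts plain View children that override onDraw and are invalidated when their state changes; the StatsView and GameActivity names below are illustrative only:

        import android.app.Activity;
        import android.content.Context;
        import android.graphics.Canvas;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.FrameLayout;

        // Plain View for the stats bar: no SurfaceView and no extra thread needed.
        class StatsView extends View {
            StatsView(Context context) {
                super(context);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                // draw score, lives, timers, etc. here;
                // call invalidate() on the UI thread whenever the stats change
            }
        }

        public class GameActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // the game's SurfaceView + GameLoopThread go into R.id.fl as usual
                FrameLayout upperBar = (FrameLayout) findViewById(R.id.upper_bar);
                upperBar.addView(new StatsView(this));

                // R.id.low_bar can take another small View, or ordinary Buttons,
                // for the on-screen controls
            }
        }

    Only the middle panel needs the game loop thread; the HUD and control views are redrawn on demand, which is cheap.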

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, and without any compelling reason.

    The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it.

    There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out.

    XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver Moment. Usually I ignore blogosphere fluff and just use this space to write what I think is important. However, #MVP10 just ended and I have a stronger sense of community. Besides, where else would I mention that my second best MacGyver moment was making a BIOS jumper out of a soda can. Aluminum is conductive and I didn't have any real jumpers lying around.

    My best moment is probably my entire home computer network. Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose.

    My Primary Domain Controller is a Dell 2300. The Service Tag indicates it was shipped to the original owner in 1999. The box has a PERC/1 RAID controller. I acquired this from a previous employer for $50. It runs Windows Server 2003 Enterprise Edition and does DNS, DHCP, and RADIUS services as a bonus. RADIUS authentication is used for VPN and wireless access. It is nice to sign in once and be done with it.

    The Secondary Domain Controller is an old desktop: a dual P-III 933 with some extra drives.

    My VPN box is a P-II 250 with 384MB of RAM and a 21GB hard drive. I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again. Dynamic DNS lets me connect no matter how often Comcast shuffles my IP.

    The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor. It cost me less than $500 to put together nearly two years ago. I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certified meant the drivers for Vista would load into Windows 2008. Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out. I recovered two of the drives yesterday and am building an iSCSI storage unit (much thanks to StarWind - great product). I am using an old AMD 1.1GHz box with 1.5GB RAM (cobbled together from three old PCs) as my storage server. The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out - maybe in a week or two.

    A couple of D-Link Gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N access point, the two notebooks and the Xbox, and you have gone from MacGyver to darn near Rube Goldberg.

    The only thing I really spend money on is power supplies and fans. I buy top-of-the-line for both. I even pull and crimp my own cables.

    Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere. Every PC and laptop is pretty much interchangeable for email and basic workstation tasks. That helps a lot too.

    Of course I will tag SQLVariant.

    Read the article

  • Welcome to JavaOne!

    - by marius.ciortea
    Welcome to this year's JavaOne conference! We are glad you dropped by. We want to keep you informed of all the happenings around JavaOne: all the events leading up to the conference and all the events during the conference week itself. We'll cover announcements, news, planning (but we won't make you go to any meetings), and snafus (nothing that makes us look too bad, of course). We'll even throw in a contest or two to make sure you are paying attention. We'll post a couple of times a week, and then more frequently as we get closer to September. There's a group of us, and we cover the Java beat, JUGs, Oracle Technology Network, Oracle Solaris, and lots more. What do you want to hear about? Let us know.

    A group of us from the office went to see the movie Iron Man 2 (it just debuted in the United States) last week and it reminded us of Java, the Java community, and JavaOne. In all three cases, from many disparate (and sometimes seemingly incompatible) parts and people, something comes together that works, is cool, and helps make a better world. Right now, there are hundreds of little islands of planning, all busy answering questions for JavaOne: What sessions get selected? What goes in the Mason Street tent (until a few weeks ago the question was: will there be a tent on Mason Street?)? What do the JUGs need? Which Oracle ACEs will be there? Can we do a surf theme at the OTN party? And, somehow, like an Iron Man suit, they all come together and work to make a great event. At least, we hope it will be great. That's for you to decide. Please don't be shy - give us your comments and suggestions. We'll be listening.

    P.S. You can attend Stark Expo online at Oracle.com/ironman2, where you can train to become a "Master Cloud Operative." I got my MCO certification. I wish I had a card to put in my wallet.

    Read the article

  • Developing a long pannable, sprite-animated Windows Store app

    - by Groo
    I am creating my first Windows Store app in XAML, and I cannot seem to find a proper example for the requirements I have (I have spent a couple of days fiddling around, so I apologize if I missed something obvious). The basic idea of the app is to have a large scrollable canvas which would lazily start animating visible parts of the view as soon as the user stops panning over certain content (with some audio played as well).

    My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns:

    - If the entire canvas is ~50 screen widths wide, is it feasible to load all content at the beginning, or do I need to plan on some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll towards the end.
    - Since content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker.

    Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?

    Read the article

  • Improving SpriteBatch performance for tiles

    - by Richard Rast
    I realize this is a variation on what has got to be a common question, but after reading several (good) answers I'm no closer to a solution here. So here's my situation: I'm making a 2D game which has (among some other things) a tiled world, and so drawing this world implies drawing a jillion tiles each frame (depending on resolution; it's roughly a 64x32 tile with some transparency). Now I want the user to be able to maximize the game (or use fullscreen mode, actually, as it's a bit more efficient) and instead of scaling textures (bleagh) this will just allow lots and lots of tiles to be shown at once. Which is great! But it turns out this puts upward of 2000 tiles on the screen each frame, and this is framerate-limiting (I've commented out enough other parts of the game to make sure this is the bottleneck). It gets worse if I use multiple source rectangles on the same texture (I use a tilesheet; I believe changing textures entirely makes things worse), or if you tint the tiles, or whatever.

    So, the general question is this: what are some general methods for improving the drawing of thousands of repetitive sprites? Answers pertaining to XNA's SpriteBatch would be helpful but I'm equally happy with general theory. Also, any tricks pertaining to this situation in particular (drawing a tiled world efficiently) are also welcome. I really do want to draw all of them, though, and I need the SpriteMode.BackToFront to be active, because
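
    The standard baseline for "drawing a tiled world efficiently" is camera-rectangle culling: compute the range of rows and columns that can actually appear on screen and submit only those, so the cost scales with the window rather than the map. A rough sketch of the index math (plain Java purely for illustration; the map, camera and renderer names are invented, not from the question):

        final class TileCulling {
            // Draw only the tiles whose cells intersect the camera rectangle.
            static void drawVisibleTiles(int[][] map, int tileW, int tileH,
                                         int camX, int camY, int viewW, int viewH,
                                         TileRenderer renderer) {
                int firstCol = Math.max(0, camX / tileW);
                int firstRow = Math.max(0, camY / tileH);
                int lastCol  = Math.min(map[0].length - 1, (camX + viewW) / tileW);
                int lastRow  = Math.min(map.length - 1, (camY + viewH) / tileH);

                for (int row = firstRow; row <= lastRow; row++) {
                    for (int col = firstCol; col <= lastCol; col++) {
                        // convert the cell back to screen space and hand it to the batch
                        renderer.drawTile(map[row][col],
                                          col * tileW - camX,
                                          row * tileH - camY);
                    }
                }
            }

            interface TileRenderer {
                void drawTile(int tileId, int screenX, int screenY);
            }
        }

    If everything in view is already culled (as the ~2000-tile figure suggests), the usual next step is to pre-render static tile layers once to an offscreen render target and draw that single large image each frame, refreshing it only when the camera crosses a tile boundary or a tile actually changes.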

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We’re writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we’re looking for database top tips too. There’s a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you’ll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we’re looking for and full competition details at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blogpost, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We’re now ready to reveal their secret identities! Judging your ASP.NET tips will be:

    Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He’s a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.

    Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.

    Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.

    Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he’s worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

    Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He’s authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.

    Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He’s been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.

    Jonathan Allen, leader and founder of the PASS SQL South West user group. He’s been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He’s spoken at SQLBits and SQL in the City, as well as at local user groups across the UK. He’s also a moderator at ask.sqlservercentral.com.

    Read the article

  • How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?

    - by Kaveh Shahbazian
    I am new to nopCommerce and ecommerce in general, but I am involved in an ecommerce project. Now, from my past experiences with RavenDB (which were mostly absolutely pleasant) and based on the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle everything related to the database. I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; for example, it brings the EF working model into other layers. I have found another project, a nopCommerce fork, which used NuDB as its database. But it did not help, because NuDB still has the feel of an RDBMS and is not as different (from one) as RavenDB. Now, first: how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second: has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a one to two month time frame? Thanks in advance.

    Read the article

  • Open Source Security packages for Rails

    - by Edwin
    I'm currently creating a complete web application using Rails 3 to familiarize myself with its inner workings and to gain a better appreciation of a working web application's moving parts. (Plus, since I'm still working on my degree, I hope that it will give me a better idea of what's BS in my education requirements and which weaknesses/skills I should focus on.) The example application I'm working on is an ecommerce site, and I've already configured the backend, routes, controllers, and so on. As part of the application, I'd like to integrate a second layer of security on top of the one Rails already provides for user authentication. However, I've been unable to find any on Google, with the exception of OAuth - which, from my understanding, is meant to secure API calls. While I could roll my own secure authentication system, I'm only in my second year of college and recognize that A) I know little about security, and B) there are developers that know much more about security that are working on open-source projects. What are some actively developed open-source security packages or frameworks that can be easily added to Rails? Pros and cons are not necessary, as I can do the research myself. P.S. I'm not sure whether I posted this in the right SE site; please migrate to SO or Security if it is more appropriate there.

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server would run MySQL and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online and web accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data.

    As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: by sending a date to an API endpoint corresponding to the last date the app was updated, the response would be JSON filled with all new objects and updated objects, plus the IDs of deleted ones, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php

    My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "delete" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
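
    For what it's worth, the scheme described (a last-modified timestamp plus a soft-delete flag, filtered by the client's last sync date) boils down to one query. A minimal sketch, with the caveat that the table and column names here (product, modified_at, is_deleted) are illustrative assumptions rather than anything from the post, and the JDBC code is only to make the shape of the query concrete - the post's server side is PHP/MySQL, but the SQL is the same:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Timestamp;

        final class DeltaSync {
            // Return everything the client needs to catch up from lastSync onwards:
            // rows with is_deleted = 1 become delete instructions, the rest are upserts.
            static void fetchChangesSince(Connection conn, Timestamp lastSync) throws SQLException {
                String sql = "SELECT id, name, price, is_deleted, modified_at "
                           + "FROM product WHERE modified_at > ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setTimestamp(1, lastSync);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            if (rs.getBoolean("is_deleted")) {
                                // emit this id in the response's "deleted" list
                            } else {
                                // emit this row in the response's "new or updated" list
                            }
                        }
                    }
                }
            }
        }

    The worry about keeping old data is the real trade-off: soft-deleted rows have to be retained at least as long as the oldest client version you still intend to support, after which they can safely be purged.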

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge Workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, having between 5 and 20 years of Oracle experience, into 5 groups - and each group has to complete its special challenge, such as moving a database from 10.2 to Exadata V2 or upgrading from single instance 10.2 to Real Application Clusters 11.2 with the new Grid Infrastructure. We actually start this training with a few presentation pieces about upgrades, Real Application Testing and Golden Gate. And one topic I always point out: keep your database tidy before the upgrade!!!

    Clean up all invalid objects - especially in the SYS and SYSTEM user schemas - BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Do this always BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well.

    I just came across this today as one group had ~240 invalid objects in the database - and because the original system was still there, we could prove that the objects had been invalid before. Good job, BUT ... :-)

    Read the article

  • Manager keeps changing requirement specification after every demo.

    - by Jungle Hunter
    Background of my working environment: My manager has no background in or understanding of computers or software whatsoever. It is highly likely he hasn't seen code in any form (not even from a physical distance of 10 feet or less) in his life. There is no one who understands the complexity of what I am asked to implement - to the point that if I semi-hardcoded something, no one would know. On Joel's test we score an unbelievable 0.

    The problems: The manager, and at times other "seniors", keep changing the requirement specification - changes which, if engineered properly rather than patched with quick "fixes", require changes to the underlying design. There is absolutely no one who looks at code (probably because no one knows how to, or even whether it should be done), which means no one will ever be able to:

    - Appreciate the complexity of the problem or the elegance of the solution.
    - Suggest improvements to the approach.
    - Appreciate the quality of the code.
    - Point out where the code can be improved.

    A lot of jargon is used which makes sense grammatically but fails to make any sense any other way. It doesn't feel, behave or work like a software company.

    The question: What should be done? Especially given that there is no one who would point out improvements in my code.

    Update: To answer HLGEM's (and possibly others') question about what I've done to try and fix it: I offered to set up Redmine and introduce source control to everyone. I said I would recommend distributed (git or mercurial) but would also talk about centralized ones and let the team decide. The response was that things are being done and will be done within weeks. I haven't seen that, nor am I aware if other parts of the company use it.

    Read the article

  • Geek Bike Ride Sao Paulo

    - by Tori Wieldt
    What do you do on a sunny Saturday in Sao Paulo when you have several Java enthusiasts, street lanes closed off for bicyclists, new cool Duke jerseys, and some wonderful bike angels to provide a tour through the city? A GEEK BIKE RIDE, of course! The weekend before JavaOne Latin America, the Sao Paulo geek bike ride was held today. We had 20+ riders and a wonderful route that took us from the Bicycle Park to and through downtown. It was a 30 km ride, but our hosts were kind enough to give riders the option to take the subway for part of the trip. Thanks to our wonderful bike angels, the usual rental bike problems like rubbing brakes, dropped chains, and even a flat tire were handled with ease. The geek bike ride wasn't just for out-of-towners. Loiane Groner, who lives in Sao Paulo, said, "I love the Geek Bike Ride! The last time I was in these parts of the city, I think I was five years old!" A good time was had by all. (My only crash of the day was riding up an escalator with my bike. Luckily, the bikers with me were so busy helping me that no pictures were taken. <phew>) Enjoy this video by Hugo Lavalle. You can also view Hugo's pictures. More pictures to come on Stephen Chin's blog. So, what city is up next?

    Read the article

  • Creating a bootable CD based on Ubuntu Server

    - by 0xC0000022L
    Note: bootable here refers to an Installation CD, not to the El Torito bootable CD standard if narrowly construed, or to a Live CD if widely construed. What tools exist to create a bootable CD based on Ubuntu 12.04? Unlike the Live CD used for the Desktop edition, the Server edition doesn't use Casper and that's exactly what I want. I.e. this question is not about a live CD! I have read InstallCDCustomization, but that only covers preseeding, adding modules etc. What I would like to achieve is rather to build a bootable CD from scratch, preferably based on the kernel of my running system, the bash and other binaries from that running system. I know how to preseed my own installation CD, so I'm comfortable with the tools involved there. However, that skips important parts such as creating the directory structure that is expected on a bootable CD. And that's what I'm looking for. I guess the question could be summed up as: what tools are the Ubuntu build masters using to author the alternate and server installation CDs and where can I find documentation for these? I would prefer doing this on the terminal (because that's how I run the Ubuntu Server installations themselves). But if I need a second machine with GUI to do it, I can certainly live with that.

    Read the article

  • Encrypting a non-Linux partition with LUKS.

    - by linuxn00b
    I have a non-Linux partition I want to encrypt with LUKS. The goal is to be able to store it by itself on a device without Linux and access it from that device when needed with an Ubuntu Live CD. I know LUKS can't encrypt partitions in place, so I created another, unformatted partition of the EXACT same size (using GParted's "Round to MiB" option) and ran this command, where xxx is the partition's device name:

        sudo cryptsetup luksFormat /dev/xxx

    Then I typed in my new passphrase and confirmed it. Oddly, the command exited immediately after, so I guess it doesn't encrypt the entire partition right away? Anyway, then I ran this command:

        sudo cryptsetup luksOpen /dev/xxx xxx

    Then I tried copying the contents of the existing partition (call it yyy) to the encrypted one like this:

        sudo dd if=/dev/yyy of=/dev/mapper/xxx bs=1MB

    It ran for a while, but exited with this just before writing the last MB:

        dd: writing `/dev/mapper/xxx': No space left on device

    I take this to mean the contents of yyy were truncated when copied to xxx, because I have dd'd it before, and whenever I have dd'd to a partition of the exact same size, I never get that error (and fdisk reports they are the same size in blocks). After a little Googling I discovered all luksFormat'ted partitions have a custom header followed by the encrypted contents. So it appears I need to create a partition exactly the size of the old one plus however many bytes a LUKS header is. So, first: what size should the destination partition be? And second: am I even on the right track here?

    UPDATE: I found this in the LUKS FAQ: "I think this is overly complicated. Is there an alternative? Yes, you can use plain dm-crypt. It does not allow multiple passphrases, but on the plus side, it has zero on disk description and if you overwrite some part of a plain dm-crypt partition, exactly the overwritten parts are lost (rounded up to sector borders)." So perhaps I shouldn't be using LUKS at all?

    Read the article

  • Code generation and IDEs vs. writing by hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I needed a great tool for writing code and learned Vim. I was happy with C and Ruby and never liked the idea of an IDE - an attitude encouraged by a lot of reading about programming.[1]

    However, then I started my first Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because:

    - Our class diagram was buggy.
    - Students more experienced in Java said they would write the code by hand.
    - I had never written any Java before and would not understand a lot of the generated code.

    So I took a different approach and wrote all methods by hand (getters and setters included). My team members wrote their parts (partly generated by VP) in an IDE, and I was "forced" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally, we had to implement a GUI, and I don't see how we could have done that in a sane manner in Vim.

    So here is my problem: I fell in love with Vim and the Unix way. But it looks like, for getting this job done (on time), the IDE/code generation approach is superior. Do you have similar experiences? Is Java, by the nature of the language, just more suitable for an IDE/code-generated approach? Or am I lacking the knowledge to produce equal amounts of code by hand?

    [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

    Read the article

  • Order of operations to render a VBO to an FBO texture and then render the FBO texture to a full-screen quad

    - by cyberdemon
    I've just started using OpenGL with C# via the OpenTK library. I've managed to successfully render my game world using VBOs. I now want to create a pixelated effect by rendering the frame to an offscreen FBO at half the size of my GameWindow and then rendering that FBO to a full-screen quad. I've been looking at the OpenTK example here: http://www.opentk.com/doc/graphics/frame-buffer-objects ...but the result is a black form. I'm not sure which parts of the example code belong in the OnLoad event and which in OnRenderFrame. Can someone please tell me if the code below shows the correct order of operations?

        OnLoad
        {
            // VBO.
            // DataArrayBuffer     GenBuffers/BindBuffer/BufferData
            // ElementArrayBuffer  GenBuffers/BindBuffer/BufferData
            // ColourArrayBuffer   GenBuffers/BindBuffer/BufferData

            // FBO.
            // ColourTexture       GenTextures/BindTexture/TexParameterx4/TexImage2D
            // Create FBO.
            // Textures            Ext.GenFramebuffers/Ext.BindFramebuffer/Ext.FramebufferTexture2D/Ext.FramebufferRenderbuffer
        }

        OnRenderFrame
        {
            // Use FBO buffer.
            Ext.BindFramebuffer(FBO)
            GL.Clear
            // Set viewport to FBO dimensions.
            GL.DrawBuffer((DrawBufferMode)FramebufferAttachment.ColorAttachment0Ext)

            // Bind VBO arrays.
            GL.BindBuffer(ColourArrayBuffer)
            GL.ColorPointer
            GL.EnableClientState(ColorArray)
            GL.BindBuffer(DataArrayBuffer)
            // If world changed
            GL.BufferData(DataArrayBuffer)
            GL.VertexPointer
            GL.EnableClientState(VertexArray)
            GL.BindBuffer(ElementArrayBuffer)

            // Render VBO.
            GL.DrawElements

            // Bind visible buffer.
            GL.Ext.BindFramebuffer(0)
            GL.DrawBuffer(Back)
            GL.Clear
            // Set camera to view texture.
            GL.BindTexture(ColourTexture)

            // Render FBO texture.
            GL.Begin(Quads)
            // Draw texture on quad
            // TexCoord2/Vertex2
            GL.End

            SwapBuffers
        }

    Read the article

  • How much detail is in a good UI regression test?

    - by GlenPeterson
    We use a detailed step-by-step user-interface regression test for our commercial web application. It has a "backbone" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high-quality software. But having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or not notice fairly obvious problems, such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, "Just demonstrate the system to yourself," they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new untrained tester can test with it without asking any questions. Yet that detail seems to be counter-productive. How much detail do you put in a regression test to make it effective? What techniques make the tester focus more on the system than on checking off items on the test?

    Read the article

  • Low level programming - what's in it for me?

    - by back2dos
    For years I have considered digging into what I consider "low level" languages. For me this means C and assembly. However, I have had no time for this yet, nor has it EVER been necessary. Now, because I don't see any necessity arising, I feel like I should either schedule some point in time when I will study the subject or drop the plan forever.

    My Position: For the past 4 years I have focused on "web technologies", which may change, and I am an application developer, which is unlikely to change. In application development, I think usability is the most important thing. You write applications to be "consumed" by users. The more usable those applications are, the more value you have produced. In order to achieve good usability, I believe the following things are vital:

    - Good design: Well-thought-out features accessible through a well-thought-out user interface.
    - Correctness: The best design isn't worth anything if not implemented correctly.
    - Flexibility: An application A should constantly evolve, so that its users need not switch to a different application B that has new features that A could implement. Applications addressing the same problem should not differ in features but in philosophy.
    - Performance: Performance contributes to a good user experience. An application is ideally always responsive and performs its tasks reasonably fast (based on their frequency). The value of performance optimization beyond the point where it is noticeable by the user is questionable.

    I think low level programming is not going to help me with any of that, except for performance. But writing a whole app in a low level language for the sake of performance is premature optimization to me.

    My Question: What could low level programming teach me that other languages wouldn't? Am I missing something, or is it just a skill that is of very little use for application development? Please understand that I am not questioning the value of C and assembly. It's just that in my everyday life, I am quite happy that all the intricacies of that world are abstracted away and managed for me (mostly by layers written in C/C++ and assembly themselves). I just don't see any concepts that could be new to me, only details I would have to stuff my head with. So what's in it for me?

    My Conclusion: Thanks to everyone for their answers. I must say, nobody really surprised me, but at least now I am quite sure I will drop this area of interest until any need for it arises. To my understanding, writing assembly these days for the processors in today's CPUs is not only unnecessarily complicated, but risks resulting in poorer runtime performance than a C counterpart. Optimizing by hand is nearly impossible due to OOE, while you do not get all the kinds of optimizations a compiler can do automatically. Also, the code is either portable, because it uses a small subset of available commands, or it is optimized, but then it probably works on one architecture only. Writing C is not nearly as necessary anymore as it was in the past. If I were to write an application in C, I would just as much use tested and established libraries and frameworks that would spare me implementing string copy routines, sorting algorithms and other kinds of stuff that serve as exercises at university. My own code would execute faster at the cost of type safety. I am neither keen on reinventing the wheel in the course of normal app development, nor on trying to debug by looking at core dumps :D I am currently experimenting with languages and interpreters, so if there is anything I would like to publish, I suppose I'd port a working concept to C, although C++ might just as well do the trick.

    Again, thanks to everyone for your answers and your insight.

    Read the article

  • SQLBeat Podcast – Episode 4 – Mark Rasmussen on Machine Guns, Jelly Fish and SQL Storage Engine

    - by SQLBeat
    In this 4th SQLBeat Podcast I talk with fellow Dane Mark Rasmussen about SQL, machine guns and jelly fish fights; apparently they are common in our homeland. Who am I kidding, I am not Danish, but I try to be in this podcast. Also, we exchange knowledge on SQL Server storage engine particulars, as well as some other "internals" like password hashes and contained databases. And then it just gets weird and awesome. There is lots of background noise from people who did not realize we were recording, and I call them out and make fun of them as they deserve; well, just one person, who is well known in these parts. I also learn the correct (almost) pronunciation of "fjord". Seriously, a word with an "F" followed by a "J". And there are always the hippies and hipsters to discuss. Should be fun.

    Read the article

  • Unable to access Ubuntu through Windows Boot Manager

    - by Arvind
    I have an ASUS K55VM. It came with DOS; then Windows 7 was installed. Now I planned to add another OS, i.e. Ubuntu. My computer had 4 partitions of 240GB, 230GB, 230GB and 230GB; I changed this to 5 parts: 243GB, 230GB, 230GB, 150GB and 80GB. I installed Ubuntu 12.04 to the 80GB partition through a USB flash drive, and it installed fine. However, the problem is with the boot options in Windows Boot Manager: when I choose Ubuntu, this is what is displayed:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
        1. Insert your Windows installation disc and restart your computer.
        2. Choose your language settings, and then click "Next."
        3. Click "Repair your computer."
        If you do not have this disc, contact your system administrator or computer manufacturer for assistance.
        File: \ubuntu\winboot\wubildr.mbr
        Status: 0xc000000e
        Info: The selected entry could not be loaded because the application is missing or corrupt.

    I tried to change the boot order for the OS and made Ubuntu my first option. It worked; however, Windows 7 did not show up in GRUB. Out of fear, I did the following:

    - Restarted my PC and changed boot option 1 back to Windows.
    - Formatted the drive to which Ubuntu was installed (the 80GB one) through Windows.
    - Restarted again.

    The problem persists: Windows Boot Manager still shows Ubuntu, and when I choose it, it gives two lines, one like "grub rescue", but my Windows 7 is working perfectly fine. Now I want a solution to remove Ubuntu from the Windows boot options and reinstall it completely.

    Read the article

  • How do I write a functional specification quickly and efficiently?

    - by giddy
    So I just read some fabulous articles by Joel on specs here (written in 2000!!). I read all 4 parts, but I'm looking for some methodical approaches to writing my specs. I'm the only, lonely dev working on this fairly complicated app (or family of apps) for a very well known finance company. I've never made something this serious. I started out writing something like a bad spec, an overview of sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I've also released a preview (a throwaway working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now.

    The whole thing comprises:

    - An MVC website (for admins & data viewing)
    - 2 Silverlight modules (for 2 specific tasks)
    - 1 desktop application

    I'm totally short on time and resources and need to get this done quickly; I also need to make sure these guys can read it equally quickly and painlessly. So how do I go about it? I'm looking for any tips, any real-world stuff - how do you guys usually do it? Do you make a mock screenie of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, then filling in HTML files in folders and making it look like my MVC URL structure, then having a section in the spec for the website and writing up a page for every URL I've got, with a screenie. For my WinForms app, I've made somewhat of a demo WinForms project; would I then put in a dialog or structure everything as I would in the real app and then screenshot it?

    Read the article
