Search Results


  • Procedural content (settlement) generation

    - by instancedName
    I have, let's say, something like a homework assignment to do. Roughly speaking, I need to write an algorithm (pseudo code is not necessary, an in-depth description is enough) for a procedure that would generate settlements, their environment and the people to populate them, as part of some larger world-generation procedure. The genre of the game is not specified; it could be any genre (RPG, strategy, colony simulation etc.) where interacting with a large and extensive world is central to the game. The procedure should be called once per settlement. At the time of calling, the world-generation procedure makes geography, culture and history inputs available. The output should be a map of the village and its immediate area, plus various potential additional information like myths, history, demographic facts etc. Quests and similar things would be a bonus, but they're not really my focus at the moment. I will leave the quality of the output for later, when I actually dig a little deeper into this topic. I am free to change the parameters as long as I have a strong explanation for doing so. The setting of the game is undetermined, so I am free to use whatever I like the most. OK, so my actual question is: can anyone who has some experience in this field of game design recommend some good literature, or point me in the direction where I should look/read/study? I'm a somewhat experienced game programmer, but I've never been into game design until now, so any help will be great. I want to do this assignment as well as I can. As for the deadline, it's not strictly set, but let's say I don't want it to take longer than a few weeks, a month in the worst case.
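
    A minimal sketch of what such a procedure's interface could look like, in Python with hypothetical input shapes, just to pin down the inputs and outputs described above:

        import random

        def generate_settlement(geography, culture, history, seed=0):
            # The inputs are plain dicts purely for illustration; a real world
            # generator would hand in richer structures.
            rng = random.Random(seed)
            # Settlement size driven by a geography input (hypothetical key).
            houses = max(3, int(geography.get("fertility", 0.5) * 40))
            # The "map": a set of (x, y) building plots on a small grid.
            plots = rng.sample([(x, y) for x in range(20) for y in range(20)], houses)
            # Demographics and lore derived from the culture and history inputs.
            people = [{"name": rng.choice(culture["names"]), "age": rng.randint(1, 80)}
                      for _ in range(houses * rng.randint(2, 5))]
            lore = [f"Founded after {rng.choice(history['events'])}."]
            return {"plots": plots, "people": people, "lore": lore}

        village = generate_settlement({"fertility": 0.7},
                                      {"names": ["Ana", "Brom", "Cedi"]},
                                      {"events": ["the great flood", "a border war"]})
        print(len(village["people"]), "inhabitants,", len(village["plots"]), "buildings")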

  • Why are embedded device apps still written in C/C++? Why not Java?

    - by hinkmond
    At the recent Black Hat 2014 conference in Sin City, the Black Hatters were focusing on embedded devices and IoT. You know? Make your networked toaster burn your bread from 10,000 miles away, over the Web, for grins and giggles. Well, apparently the Black Hatters say it can be done pretty easily these days, which is scary. See: Securing Embedded Devices & IoT. Here's a quote: All these devices are still written in C and C++. The challenges associated with developing securely in these languages have been fought for nearly two decades. "You often hear people say, 'Well, why don't we just get rid of the C and C++ language if it's so problematic. Why don't we just write everything in C# or Java, or something that is a little safer to develop in?'," DeMott says. Gah! Why are all these IoT devices still using C/C++? Of course they should be using Java SE Embedded technology! It's a natural fit for better security on embedded devices. Or, I guess, developers really don't mind if their networked toasters do char their breakfast. If it can be burned, it will be... That's what I say. Unless they use Java. Hinkmond

  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent two frustrating days jumping through hoops and browsing through different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) 2011 FPP licenses. I found jack - or, to be more precise, the closest I found in my country were WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. The question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And the companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying them; you need a hawk's eye for that shiny little Microsoft partner logo and have to spam through a bunch of emails, not knowing whether you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license in the online Microsoft Store. Well whippideegoddamndoo, they sell that, but they don't sell WHS 2011 licenses. Why does a company make it so hard to buy its products? Let's not even talk about the licensing itself being a pain.

  • Collision and Graphics integration

    - by Shlomi Atia
    I'm a little confused about the integration between collision and graphics. They both need to share the same position in the world. The most obvious choice is the center of the entity, which is fine for bounding volumes and fixed-size sprites. However, for characters with sprites of varying heights, like this: http://gamemedia.wcgame.ru/data/2011-07-17/game-sprite-sheet.jpg this is no longer good. The character won't align with the ground if I draw it from the center. I could just make all the sprites the same height, but that would be a waste of memory (the largest sprite is four times larger than the smallest one). Even then, it is not an option at all with skeletal sprites like this one: http://user-generated-content.java-gaming.org/img-vault/212a171fc1ebb27ab77608fb9b2dd9bd9205361ce6300b21a7f8d06d025fbbd8.png It seems that the graphics need to be drawn from the ground for characters, but not for other images such as scenery and obstacles. The only solution I could think of was having another position, a draw-position, which is the entity center for images and the bottom of the collision volume for characters. Then, when I draw relative to that position, it should work properly. I haven't found any references for something like that, so I'm kinda insecure about it. Does anyone know of a better approach to this problem? Thanks
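
    A minimal sketch of that draw-position idea, in Python with made-up field names, assuming y grows downward as in most 2D APIs:

        def draw_position(entity):
            # Anchor point handed to the renderer. For characters we anchor at
            # the bottom-center of the collision box so sprites of different
            # heights all line up with the ground; for scenery we keep the
            # plain center. The field names here are hypothetical.
            x, y, w, h = entity["collision_box"]        # y-down convention
            cx = x + w / 2.0
            if entity["kind"] == "character":
                return (cx, y + h)                      # bottom-center
            return (cx, y + h / 2.0)                    # center

        def sprite_top_left(entity, sprite_w, sprite_h):
            # The renderer then offsets the sprite by its own size, so a tall
            # and a short sprite both put their feet on the same anchor.
            ax, ay = draw_position(entity)
            if entity["kind"] == "character":
                return (ax - sprite_w / 2.0, ay - sprite_h)
            return (ax - sprite_w / 2.0, ay - sprite_h / 2.0)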

  • TDD: Write a separate test for object initialization, or rely on other tests to exercise it?

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is an exercise of the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you drop it and let the other tests simply exercise the loading? Or leave it in so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++, and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
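
    One common way to keep the duplication in one place is a fixture whose setup performs the load before every test. A minimal sketch using Python's unittest (CppUnit's setUp() plays the same role); LegacyThing is a stand-in for the legacy class:

        import unittest

        class LegacyThing:
            """Stand-in: load() must succeed before anything else works."""
            def __init__(self):
                self.loaded = False
            def load(self, data):
                if not data:
                    raise ValueError("bad input")
                self.loaded = True
            def compute(self):
                assert self.loaded
                return 42

        class LegacyThingTests(unittest.TestCase):
            def setUp(self):
                # Every test gets a freshly loaded object. If loading breaks,
                # every test errors out *here*, which points straight at load()
                # even without an explicit TestLoad.
                self.obj = LegacyThing()
                self.obj.load("fixture data")

            def test_load_failure_raises(self):
                self.assertRaises(ValueError, LegacyThing().load, "")

            def test_compute(self):
                self.assertEqual(self.obj.compute(), 42)

        if __name__ == "__main__":
            unittest.main()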

  • After upgrading to trusty, ALSA midi connection (aconnect) doesn't seem to work right

    - by SougonNaTakumi
    Previously, on Kubuntu 13.10, I was able to open vmpk or plug in a MIDI keyboard and, provided that TiMidity was running in server mode, run aconnect [keyboard port (129:0 for vmpk)] 14:0 followed by aconnect 14:0 128:0 and I could play the keyboard and get sound. But now, a while after upgrading to trusty, I tried to do that and didn't get any sound. TiMidity itself still plays files fine, but if I try to play them with aplaymidi, I still just get silence. Oddly, the MIDI files are clearly being read. When I ran (where 130:0 was vmpk's input port) aplaymidi -p 130:0 ~/path/to/midi.mid vmpk was highlighting notes on the piano as if it were playing the MIDI file. One time I tried this, TiMidity (?) very briefly played a fraction of a second of the first chord of my song before everything went silent and vmpk just highlighted the first voice on the keyboard as usual. Now the weirdest part is that probably about 40% of the time, when I've played at least one note with either aplaymidi or vmpk and then run aconnect -x I get a sudden burst of a note or chord from my speakers (that is, if I played one note, I get a note; if I played multiple sequential notes, they turn into a chord), as if the notes were being queued up but not played, and removing the connections somehow liberated them. I have no idea what's going on there. A little while ago I remember having a problem with Audacity playing wav files sped up and also locking up if I tried to pause it, which it stopped doing when I set the audio devices to the actual audio devices rather than pulse. But now when I checked again, it's doing the opposite: it won't play audio at all and/or acts weirdly if I don't set the audio devices to pulse, and either way it will very occasionally randomly do the speeding-up thing regardless. Oddly, in the midst of what looks like a pretty screwed-up sound system, sound in VLC and Firefox has been working fine, and if I play a wav file with aplay ~/path/to/sound.wav that works fine too. Any idea what I could do to figure out what's wrong with ALSA and/or fix it?

  • Is creating a separate pool for each individual png image in the same class appropriate?

    - by Panzercrisis
    I'm still possibly a little green about object pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3):

        [Embed(source = "/../assets/images/game/misc/red_door.png")]
        private const RED_DOOR:Class;
        private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR());
        private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true);
        . . .
        public function produceRedDoor():Image
        {
            // get a Red Door image
        }

        public function retireRedDoor(pImage:Image):void
        {
            // retire a Red Door image
        }

    Except that there are four colors: red, green, blue, and yellow. So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally, there are several items in the game that follow this four-color pattern, so for each of them we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so trying to throw all the doors, for instance, into a single pool and then changing their color properties around isn't going to work. Also, the absence of the static keyword is due to its slowness in AS3. Is this the right way to do things?
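
    For contrast, here is what a single pool parameterized by a key could look like. This is a Python sketch of the shape of the design rather than Starling-specific AS3; the factory argument stands in for new Image(RED_DOOR_TEXTURE):

        class ImagePool:
            """One pool for every door color: a free-list per key."""
            def __init__(self, factory):
                self._factory = factory      # key -> newly created object
                self._free = {}              # key -> list of retired objects

            def produce(self, key):
                stack = self._free.setdefault(key, [])
                return stack.pop() if stack else self._factory(key)

            def retire(self, key, obj):
                self._free.setdefault(key, []).append(obj)

        # Usage sketch: in AS3 the factory would wrap Texture.fromBitmap and
        # new Image(...); here a plain dict stands in for the Image object.
        pool = ImagePool(lambda color: {"texture": color})
        door = pool.produce("red")
        pool.retire("red", door)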

  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project. I'm a little new at this, so I'd appreciate any help. The project is a website that will display a list of photo subject groups (such as "nature", "people", "sports", etc.) on the home page. The photos will all be in subdirectories of the main photo directory (photos), and each subject group will correspond to a subdirectory of photos. For example, in the directory photos there might be three subdirectories, "nature", "people" and "sports", and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to update/add/delete a subject group, all he has to do is add, delete or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files on the website and then send a web page built from that information. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks
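
    For scale, here is roughly all the code that job needs; a sketch in Python (PHP, Ruby or any other server-side language would look much the same), assuming a photos/ directory next to the script:

        from pathlib import Path

        PHOTO_ROOT = Path("photos")   # assumed location of the gallery root

        def gallery_index():
            """Map each subject group (subdirectory) to its image files."""
            groups = {}
            for group in sorted(p for p in PHOTO_ROOT.iterdir() if p.is_dir()):
                groups[group.name] = sorted(
                    f.name for f in group.iterdir()
                    if f.suffix.lower() in {".jpg", ".jpeg", ".png", ".gif"})
            return groups

        def render_home_page():
            # Bare-bones HTML just to show the idea; a template engine
            # would normally do this part.
            items = "".join(f"<li><a href='/{g}/'>{g}</a></li>" for g in gallery_index())
            return f"<html><body><ul>{items}</ul></body></html>"

        if __name__ == "__main__":
            print(render_home_page())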

  • Deleting Unused Swap Partitions

    - by Nikita Kononov
    Good evening everyone, I have a little issue with swap partitions. Due to some problems after installing Ubuntu the first time, I reinstalled it, and now I have three swaps. Here is the sudo fdisk -l result:

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xaa9693fe

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1         2048    52430847    26214400   1c  Hidden W95 FAT32 (LBA)
        /dev/sda2   *  52430848   540677076  244123114+   7  HPFS/NTFS/exFAT
        /dev/sda3    540678142  1465147391   462234625    5  Extended
        Partition 3 does not start on physical sector boundary.
        /dev/sda5   1452750848  1465147391     6198272   82  Linux swap / Solaris
        /dev/sda6   1440352256  1452742655     6195200   82  Linux swap / Solaris
        /dev/sda7    540678144  1427951615   443636736   83  Linux
        /dev/sda8   1427953664  1440339967     6193152   82  Linux swap / Solaris

    So the swaps on /dev/sda5 and /dev/sda6 are no longer in use, as far as I understand, and I was planning to delete them; however, I ran into a problem. I downloaded and burned a GParted Live CD, booted it up, and tried to delete those partitions, but I have no idea how to add the resulting 12 GB of unallocated space to the existing OS partition, in this case /dev/sda7. Is there any way I can delete the two swaps and extend /dev/sda7 into the unallocated space? Thank you in advance!

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don't be afraid of that title - I'm not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are people in the "DBA" role. That means we often have more to do than the time we have to do it in. And, oddly enough, the first thing that gets sacrificed is process improvement - the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and... well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked, "Where do I start?" Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things, or find out how others have automated their systems. In general, find out what you're doing and how, and then see if that can be improved. It's kind of like doing a performance-tuning gig on yourself! If you're pressed for time, look for bite-sized articles (like the ones I've done here for PowerShell and SQL Server) that you can follow in a "serial" fashion. In a short time you'll have a new set of knowledge you can use to make your day faster.

  • Using Telerik MVC with your own custom jQuery and/or other plug-ins

    - by Steve Clements
    If you are using MVC, it might be worth checking out the Telerik controls (http://demos.telerik.com/aspnet-mvc); they are free if you are building an internal or "not for profit" application. If you do choose to use them, however, you could come up against a little problem I had: using the Telerik controls with your own custom jQuery. In my case I was using the jQuery UI dialog, and it kept throwing an error where I was setting my div to a dialog:

        $("#textdialog").dialog({

    The problem is that when you use the Telerik MVC controls you need to call ScriptRegistrar:

        @Html.Telerik().ScriptRegistrar()

    in order to set up the JavaScript for the controls. By default this adds a reference to jQuery, and if you have already added a reference to jQuery because you are using it elsewhere, this causes a problem. I found the solution here, and it was to change the above ScriptRegistrar call to this:

        @Html.Telerik().ScriptRegistrar().jQuery(false).DefaultGroup(g => g.Combined(true).Compress(true));

    If you come across this one on Stack Overflow, it won't work - in my case the HtmlEditor would render no problem, but was unusable. Which is the same as someone else found when using the tab control - they went to the bother of rewriting the ScriptRegistrar. Not for me, that one!!

  • How does font rendering actually work?

    - by Andrea
    I realize that I know essentially nothing about the way fonts get rendered in my computer. From what I can observe, font rendering is generally made in a consistent way throughout the system. For instance, the subpixel font hinting settings that I configure in my DE control panel have influence on text which appears on window borders, in my browser, in my text editor and so on. (I should observe that some Java applications show a noticeable difference, so I guess they are using a different font rendering mechanism). What I get from the above is that probably all applications that need font rendering make use of some OS (or DE)-wide library. On the other hand, browsers usually manage their own rendering through a rendering engine, that takes care of positioning various items - including text - according to specific flow rules. I am not sure how these two facts are compatible. I would assume that the browser would have to ask the OS to draw a glyph at a given position, but how can it manage the flow of text without knowing beforehand how much space the glyph will take? Are there separate calls to determine the glyph sizes, so that the browser can manage the flow as if characters were little boxes that are later filled in by the OS? (Although this does not take care of kerning). Or is the OS responsible for drawing a whole text area, including text flow? Does the OS return the rendered glyph as a bitmap and leaves it to the application to draw that on the screen?
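
    Yes - layout is typically driven by metrics queried before any drawing happens. A small illustration using Python's Pillow library as a stand-in for the platform text stack (browsers talk to DirectWrite, Core Text or FreeType/HarfBuzz for the same two steps); the font path here is an assumption:

        from PIL import Image, ImageDraw, ImageFont

        # DejaVuSans ships with many Linux systems; adjust the path as needed.
        font = ImageFont.truetype(
            "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 16)

        img = Image.new("RGB", (400, 50), "white")
        draw = ImageDraw.Draw(img)

        # Step 1: measure. The advance width is what a layout engine needs to
        # decide line breaks and positions - no pixels are touched yet.
        advance = draw.textlength("Hello, glyphs", font=font)

        # Step 2: draw, once the layout pass has decided where the text goes.
        draw.text((10, 10), "Hello, glyphs", font=font, fill="black")
        print(f"advance width: {advance}px")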

  • Help with converting an XML file into a 2D level (ActionScript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except that the level my code generates is not the same as what I see in Ogmo. I've checked the array and it fits the level in Ogmo, but when I loop through it with my code I get the wrong thing. I've included my code for creating the level below, as well as an image of what I get and what I'm supposed to get. EDIT: I tried to add the image, but I couldn't get it to display properly. Also, if any of you know of better level editors, please let me know.

        xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
        xmlLoader.load(new URLRequest("Level1.oel"));

        function LoadXML(e:Event):void
        {
            levelXML = new XML(e.target.data);
            xmlFilter = levelXML.*;
            for each (var levelTest:XML in levelXML.*)
            {
                crack = levelTest;
            }
            // NB: split('') keeps every character, so if the exported level
            // contains separators (commas, or newlines between rows), those
            // end up in levelArray too and shift each row - a possible cause
            // of the skewed output.
            levelArray = crack.split('');
            trace(levelArray);
            count = 0;
            for (i = 0; i <= 23; i++)
            {
                for (j = 0; j <= 35; j++)
                {
                    if (levelArray[i * 36 + j] == 1)
                    {
                        block = new Platform();
                        s.addChild(block);
                        block.x = j * 20;
                        block.y = i * 20;
                        count++;
                        trace(i);
                        trace(block.x);
                        trace(j);
                        trace(block.y);
                    }
                }
            }
            trace(count);
        }

  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to a comment on my friend Andrew Brust's blog about Silverlight versus HTML 5. Andrew's blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another view from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx The commenter is raving about HTML 5 and how it is the future and SL is not. Well, my reaction is "hogwash". Sure, HTML 5 is important and does some interesting stuff. Check out what Bing.com is doing with it on some days and you can see. But to say that XAML is dead is nuts. I have been wrapping up bugs on a cross-browser version of an application for a while now. What's the state of cross-browser development today? Well, better than a few years ago, but far from perfect. Each browser vendor interprets the specs in a slightly different way, and you must account for that. The worst offender among the major browsers? Apple and its Safari. I had to make more changes for it than for any other. What's that got to do with XAML and SL/WPF? Well, you write your SL code once and it runs in all browsers that support it, no changes. The iPad does not? Well, they should be taken to court and forced to, just like MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both. Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML 5, but it's not a panacea, nor will it replace all other technologies.

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have found that a lot of the math is beyond my grasp. I have come fairly close, and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found in this Stack Overflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I convert from screen coordinates to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point(0, 0);
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = mouseLocation.Y + (int)camera.Top;

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = (cameraOffset.X + 2 * cameraOffset.Y) / Global.TileWidth;
        mouseTileLocation.Y = -((2 * cameraOffset.Y - cameraOffset.X) / Global.TileWidth);

    Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer; cameraOffset is the screen position of the mouse pointer that includes the position of the game camera; mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
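
    Not an authoritative fix, but one frequent cause of exactly this down-and-right bias is C#'s integer division, which truncates toward zero rather than flooring, combined with measuring from the tile's bounding-box corner instead of the diamond's center. A sketch of the conversion with both corrections, in Python for brevity (tile height assumed to be half the width; the signs may need flipping to match the inverted y axis described above):

        import math

        TILE_W = 64                  # assumed tile footprint width
        TILE_H = TILE_W // 2         # assumed 2:1 isometric tiles

        def screen_to_tile(sx, sy, cam_left, cam_top):
            """Mouse position -> tile coordinates, floor-based."""
            # Half-tile shift: measure from the diamond's center, not the
            # corner of its bounding rectangle (assumption about the layout).
            wx = sx + cam_left - TILE_W / 2
            wy = sy + cam_top - TILE_H / 2
            # math.floor, unlike C# integer division, rounds toward negative
            # infinity, so picks near tile boundaries don't drift one way.
            tx = math.floor((wx / (TILE_W / 2) + wy / (TILE_H / 2)) / 2)
            ty = math.floor((wy / (TILE_H / 2) - wx / (TILE_W / 2)) / 2)
            return tx, ty

        print(screen_to_tile(200, 120, 0, 0))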

  • Guitar Hero clone and music

    - by mm24
    I am an indie game developer making a game similar to Guitar Hero. I am using tracks composed by other musicians, and I have contacted them to sign a licensing contract. As this is my first indie project, I have no idea how I should deal with the royalties aspect of this, because each track has an ISRC code, and each time a track is played in public there is a fee to pay to the "local" registration authority. An iPhone game is downloaded rather than played on air (or physically distributed), and hence I think there is specific legislation for this that specifies how much one should pay. I own some of the tracks, and for those I think I won't have to pay a royalty (even if the track has an ISRC code), but for the other tracks I just have a license "to use". I wonder how a game like Guitar Hero can sell on iPhone for as little as $0.99, then offer "in-app purchases" for packs of 3 songs (for about 2 dollars), and still make a profit (I imagine they will have to pay an ISRC royalty). Does anyone have any idea how this can work, or is there a section of this forum where I can ask this question? I understand it's not about coding, but I think it is about development of a game in the broader sense.

  • How to Create Shortcuts to Programs on USB Drives

    - by Lori Kaufman
    If you work on multiple computers, you probably use a USB drive to take your favorite portable software with you. Portable application suites like PortableApps.com, CodySafe, or Lupo PenSuite each have a main menu providing access to the programs installed into the suite. However, there may be reasons why you need to create shortcuts to programs on your USB drive. You may be using a program that does not integrate into the suite's main menu. Or, you may not be using an official portable application suite at all, and just placing portable software in a folder on your USB drive. Maybe you prefer using shortcuts on the root of the USB drive, like a portable desktop. Whatever your reason, you can't just create a shortcut to an application on the USB drive and place it in the root of the drive. The shortcut will always refer to the full path of the application, including the drive letter. Different computers assign different drive letters to USB flash drives, so you would have to change the drive letter for your shortcuts when it changes. You can assign a static drive letter to the USB drive. However, if you would rather not do that, there is a way to create shortcuts to programs on a USB drive using relative paths. Because Windows does not support relative paths in shortcuts, we will show you how to create a "shortcut" on the root of a USB drive by creating a batch (.bat) file and converting it to an executable (.exe) file.

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence. Applications would have to filter that noise themselves. Older Mac versions used only CR in text mode as the line ending, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode. Implementing newline interpretation in the parser might also avoid the overhead of text mode, where buffers need to be rewritten (and possibly resized) before being returned to the application; that work may be cheaper when done in the application itself. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
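
    The translation is easy to see side by side in Python, where the open() mode chooses the behavior; a small demo (the temp-file path is just for illustration):

        import os
        import tempfile

        # Write two lines with explicit CR LF endings in binary mode, then
        # read the file back both ways to watch the text-mode translation.
        path = os.path.join(tempfile.gettempdir(), "newline_demo.txt")
        with open(path, "wb") as f:
            f.write(b"first\r\nsecond\r\n")

        with open(path, "rb") as f:            # binary: bytes pass through
            print(f.read())                    # b'first\r\nsecond\r\n'

        with open(path, "r") as f:             # text: CR LF -> '\n'
            print(repr(f.read()))              # 'first\nsecond\n'

        # newline="" keeps text decoding but disables the translation,
        # which is what the csv module, for instance, requires.
        with open(path, "r", newline="") as f:
            print(repr(f.read()))              # 'first\r\nsecond\r\n'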

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there, little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8 dude that sits in the corner. I made a mistake recently: I took the USB key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA plus a DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA environment. I can't really seem to find any documents that directly compare traditional VM infrastructure with Docker solutions, and I'm wondering if it is even fair to compare them. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production environment on the server, so I don't need the HA that ESXi gives me when updating security or the OS, for example, by restarting one VM at a time; I can just shut the whole thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions concerning how to process files that carry a header and trailer in them. Typically those files are the product of a data extract from non-Microsoft products, e.g. an Oracle database encompassing various tables' data, where every row starts with an identifier. For example, such a file's records could look like:

        HDR,INTF_01,OUT,TEST,3/9/2011 11:23
        B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
        B2,TEST DATA,2011-03-18 10:00:44,Y
        B3,LEG 1 TEST DATA,TRAN TEST,N
        B4,LEG 2 TEST DATA,TRAN TEST,Y
        FTR,4,TEST END,3/9/2011 11:27

    A developer is normally able to break the records apart using a Conditional Split Transformation component, by employing an expression similar to Output1 -- SUBSTRING(Output1,1,2) == "B1" and so on, but often a verification is required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This portion sometimes trips people up, so I decided to share what I came up with. As an aside, I want to mention that the approach I use is slightly more portable than some others I saw, because I use a separate DFT that can be copied and pasted into a new SSIS package designer surface or re-used within the same package, and it can survive several trailer/footer records (!). See how a ready DFT can look: The first step is to create a Flat File Connection Manager and make sure you get the row split into columns like this: After you are done with the Flat File connection, move on to adding an Aggregate, which is used simply to assign a value to a variable (here the aggregate handles the possibility of multiple footers/headers): The next step is adding a Script Transformation as destination, which requires very little coding. First, some variable setup: and finally the code: As you can see, it is important to place your code into the appropriate routine in the script; otherwise the end result may not be as expected. As the last step, you would use a regular Script Component to compare the variable value obtained from the DFT above to a package variable value (obtained, say, via a Row Count component) to determine whether the file being processed has the right number of rows.
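
    The verification itself is tiny once the trailer count is captured. Outside SSIS, the same check could be sketched in Python like this, assuming the record layout shown above (an HDR line, B-prefixed data lines, and an FTR line whose second field holds the expected count):

        def validate_flat_file(path):
            """Compare the data-row count against the count in the FTR record."""
            data_rows = 0
            trailer_count = None
            with open(path, "r") as f:
                for line in f:
                    tag = line.split(",", 1)[0]
                    if tag.startswith("B"):        # B1..B4 data records
                        data_rows += 1
                    elif tag == "FTR":             # trailer: FTR,<count>,...
                        trailer_count = int(line.split(",")[1])
            if trailer_count is None:
                raise ValueError("no trailer record found")
            return data_rows == trailer_count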

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that on each frame it creates a Scaling Matrix (S), a Rotation Matrix from a Quaternion (R) and concatenates them together (S*R). Once i have SR, I insert the translation values into the bottom of the three columns. So i end up with a transformation matrix that looks like: [SR SR SR 0] [SR SR SR 0] [SR SR SR 0] [tx ty tz 1] This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If i then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box. The shape itself becomes distorted and it appears as though the Scale factors are affecting the object on the World X,Y,Z axis rather than the local X,Y,Z axis of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Phar etc) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling which I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think off is that when constructing a Scale Matrix: [sx 0 0 0] [0 sy 0 0] [0 0 sz 0] [0 0 0 1] This is scaling along the World Axis instead of the object's local Direction, Up and Right vectors or it's local Z, Y, X axis. Does anyone have any tips or ideas on how to handle construction a transformation matrix that allows for non-uniform scaling and rotation? Thanks!

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When it's complete, we send it off to QA, who test it on that branch, and when they give us the green light, we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long the branch will be "out" for, so it can cause many conflicts when it's merged back in. For example, let's say I want to rename a function because the feature I'm working on makes heavy use of it, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function, rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main, because all changes have to go through QA, and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring.

  • First Installation

    - by Dj Zia
    I had Windows XP on my desktop originally. Yesterday we were able to replace it with Ubuntu 12.04 from a thumb drive; the live CD did not work. I am more familiar with the system they introduced me to at school recently, and Linux distributions are pretty similar all around, so I am finding the differences not much of an issue. There were a few problems with the installation and with getting GRUB working, and there is the low-graphics issue, which brings us to command-line basics. I am a little familiar with Linux, so it isn't too intimidating; I am really good with step-by-step instructions given in simple order. My question is that Ubuntu installed without any basic drivers or generic headers, and it is also not connecting my computer to the internet. My system is older, not new; for its time it had above-average parts. How do I get the headers, make sure the right configuration is set, and where do I get the drivers for my whitebox desktop so it runs Ubuntu properly? Thank you

  • How to implement smart card authentication with a .NET Fat client?

    - by John Nevermore
    I know very little about smart card authentication in general, so please point out or correct me if anything below doesn't make sense. Let's say I have:

        - a Certificate Authority "X"'s smart card (non-exportable private key)
        - drivers for that smart card written in C
        - a smart card reader
        - the CA's authentication OCSP web service
        - a requirement to implement user authentication in a .NET fat client application via a smart card that was issued by the CA "X".

    I tried searching for info on the web, but to no avail. What would the steps be? My first thought was: set up a web service that would allow saving of (for example) scores of a ping-pong game for each user. Each time someone tries to submit a score via the client application, he can only do so by inserting the smart card into the reader. Then the public key is read from the smart card by native C calls through .NET and sent to my custom web service, which in turn uses the CA's authentication OCSP web service to prove the validity of the public key/public certificate (?). If the public key is okay and valid, encrypt a random sequence of bytes with the public key and send it to the client application. If the client application sends back the correctly decrypted random sequence of bytes along with the score of the ping-pong game, then the score is saved in the database for the given user. My question is, is this the correct way to do it? What else should I know about smart card authentication?
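
    What is described is essentially certificate-based challenge-response. A server-side sketch of just the challenge step, in Python with the cryptography package (OCSP validation and the native smart card calls are out of scope; names are illustrative, and in practice having the card *sign* a challenge is more common than having it decrypt one):

        import hmac
        import os
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding

        def issue_challenge(cert_pem: bytes):
            # Encrypt a random nonce to the card's public key. Only the
            # matching private key, which never leaves the card, can recover
            # it, so returning the nonce proves possession. The certificate
            # is assumed to have already passed OCSP validation.
            cert = x509.load_pem_x509_certificate(cert_pem)
            nonce = os.urandom(32)
            ciphertext = cert.public_key().encrypt(
                nonce,
                padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                             algorithm=hashes.SHA256(), label=None))
            return nonce, ciphertext   # keep nonce server-side, send ciphertext

        def verify_response(expected_nonce: bytes, response: bytes) -> bool:
            # Constant-time compare so timing doesn't leak partial matches.
            return hmac.compare_digest(expected_nonce, response)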

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us, that involves as little work as possible to implement from their side. We both have Google Analytics. Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and probably we will have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately. Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through the whole sign up process just to deal with us and pay us commission. I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign up and installation procedure is simple and fast.
