Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • New Features Of WordPress 3.3 You Must Know

    - by Gopinath
    After months of beta testing, WordPress 3.3 is set to be released at the end of this month. The new version packs in several new features, and a few of them are going to excite WordPress admins. In this post we discuss the most exciting ones.

    1. Drag and Drop Media Uploads - One of the biggest improvements in this version of WordPress is its all-new media uploader. You can now upload multiple files by simply dragging and dropping them, instantly resize images, and filter files by type. The media uploader also sports a brand new look: WordPress adopted the Plupload library to power the media uploader component, and it is written by the same team that created the popular TinyMCE editor.

    2. Improved Admin Bar (Toolbar) - The admin bar, now called the toolbar, has received a handful of makeovers. Rarely used items such as the search box have been removed so the bar is less cluttered, the user menu and its related options have moved to the right (much like Google's user bar), and the colours have been tweaked to be easier on the eyes.

    3. Fly-out Admin Menus - All the left sidebar menus of the WordPress admin now sport a fly-out style to save a click. In previous versions, if you wanted to reach a submenu in the sidebar you first had to click its category and then choose the item from the expanded list. Now the menu items fly out on mouse-over.

    4. Adaptive Admin - Layout Auto-Adjusts to Fit Various Devices - If you own an iPad or any other tablet, you are going to love this feature. The WordPress admin site is now much friendlier to tablets and smartphones: it automatically adjusts its layout to fit the device you are accessing it from, so working with the dashboard on a tablet is going to be more fun.

    5. Other Features - Now that we have covered the four most useful features, here is a short list of others that may interest you: tooltips are displayed wherever possible to help newcomers understand the admin site; responsive layouts; jQuery 1.7 and jQuery UI 1.8.16 are the workhorses powering WordPress; and general performance improvements.

    This article titled, New Features Of WordPress 3.3 You Must Know, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new at Cocos2d and is testing to displaying an image over the "Hello World" text on a second layer and need help to get it work. I guess it is some basic stuff here and appreciate any tips etc. with this. I know that if i put the display-code (myLayer1) in the "init" it work or do the call [self goHere] from the "init" in myLayer1 it works but i want to call the "goHere" directly. I have the following code: HelloWorld.m: #import "HelloWorldLayer.h" #import "myLayer1.h" // HelloWorldLayer implementation @implementation HelloWorldLayer +(CCScene *) scene { // 'scene' is an autorelease object. CCScene *scene = [CCScene node]; // 'layer' is an autorelease object. HelloWorldLayer *layer = [HelloWorldLayer node]; myLayer1 *layer1 = [myLayer1 node]; // add layer as a child to scene [scene addChild: layer]; [scene addChild: layer1]; // return the scene return scene; } // on "init" you need to initialize your instance -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { // create and initialize a Label CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64]; // ask director the the window size CGSize size = [[CCDirector sharedDirector] winSize]; // position the label on the center of the screen label.position = ccp( size.width /2 , size.height/2 ); // add the label as a child to this Layer [self addChild: label]; myLayer1 *a1 = [myLayer1 new]; [a1 goHere]; [myLayer1 release]; } return self; } myLayer1.m: #import "myLayer1.h" @implementation myLayer1 -(void)goHere { NSLog(@">>>>goHere<<<<"); CGSize size = [[CCDirector sharedDirector] winSize]; CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"]; vv.position = ccp( size.width /2 , size.height/2 ); [self addChild:vv z:3]; } -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { } return self; } @end

    Read the article

  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (the excellent "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 292 in the 2nd edition) of how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the grammar from the book's example:

    S -> E
    E -> E - T
    E -> T
    T -> ( E )
    T -> n

    n is a terminal. The sequence of transitions that looks "weird" to me is:

    1) S -> . E        eof
    2) E -> . E - T    eof
    3) E -> . E - T    -
    4) E -> E . - T    -
    5) E -> E - . T    -

    (Note: in the above table, the state numbers are in front and the lookahead symbol is at the end.) What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol, and, more importantly, why is eof no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as:

    5) E -> E - . T    - eof

    And another thing: n is a terminal, so why is it not used at all as a lookahead symbol? I mean, we expect to see - or (, that is fine, but does the lack of n mean we are sure it won't appear in the input? Update: after more reading I am only more confused ;-) What really is a lookahead? Because I see a state such as this (p. 292, 2nd column, 2nd row):

    E -> E . - T    eof

    The lookahead says eof but the incoming input says -. Isn't that a contradiction? And it is not only in this book.
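
    For reference, here is the usual textbook LR(1) closure rule (stated from standard parsing theory, not quoted from Grune & Jacobs), since it is what generates those item lookaheads; the worked line underneath applies it to item (2) above:

    ```latex
    % Standard LR(1) closure rule: for an item  [A -> alpha . B beta, a],
    % add  [B -> . gamma, b]  for every production B -> gamma
    % and every  b in FIRST(beta a).
    %
    % Applied to item (2):  [E -> . E - T, eof]  has beta = "- T" and a = eof,
    % so FIRST(- T eof) = { - }, which is why item (3) carries lookahead "-".
    \[
    [A \to \alpha \cdot B\beta,\ a]
    \;\Longrightarrow\;
    [B \to \cdot\,\gamma,\ b]
    \quad \text{for all } B \to \gamma,\ b \in \operatorname{FIRST}(\beta a)
    \]
    ```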

    Read the article

  • Are they asking too much of me?

    - by Tesserex
    Or am I just whining? Background: I work for a "startup," which I put in air quotes because the company has been around for 4 years. We have about 40 employees in three offices, 9 here plus some part time. We have a good amount of investment and bring in about 75% of what we spend (so not profitable just yet.) Standard work week is supposed to be about 60 hours, but they justify that as we have to be online when our international (Taiwan and Vietnam) offices are awake. When I started the job 6 months ago, I spent about a month prototyping an iphone app and did really well on my own. They also found out about my facebook applications and how many users they got. Putting 2 and 2 together (and winding up at -7) they realized 1. I'm independent and innovative (because I was able to use stackoverflow to answer my iOS questions instead of bugging my superiors) and 2. I must have an eye for marketing (since my fb apps grew totally organically without me doing any advertising), and assigned me to a project optimizing adwords campaigns. Today I got reviewed, and then chewed out, by our CEO for not totally rocking this project. Now I thought I was doing ok, but the CEO said the project is stagnant and they're expecting more from me. But since it's a startup, they play loose with job roles and I've had plenty of other things to do in the past three months. Every time I ask what's most important, I get conflicting responses depending who I ask, and the end result is that almost everything has equal priority - high. I could go on about how I don't think adwords is worthwhile for us since our profit margin is so slim, and how we should be trying to improve our website first, but that's not the point. I also have explained to the office director (who originally assigned me the project, not the CEO) that I don't actually know anything about marketing, I'm just a decent programmer, but they think my general smarts will prove capable of tackling this challenge. The CEO also clarified that he wants a more technical and algorithmic approach to the problem. So is there something I can do to address this? Combined with my existing and confusing workload, should I be raising an issue? Or should I do the grown up thing and give it my all, asking for help when I need it and hoping for the best? Sorry if this is very rant-ish.

    Read the article

  • fdisk shows overlapping partitions

    - by Campa
    At every boot of Ubuntu, a partition gets re-mounted more than once, sometimes causing very long boots. Example below:

    > dmesg
    ...
    [ 21.472020] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro
    ...
    [ 42.021537] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0
    ...

    I suspect there is a problem of overlapping partitions here, regarding sda4 and sda5:

    > sudo fdisk -l
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1           63      610469      305203+  de  Dell Utility
    /dev/sda2       612352    32069631    15728640    7  HPFS/NTFS/exFAT
    /dev/sda3   * 32069632   238979788   103455078+   7  HPFS/NTFS/exFAT
    /dev/sda4    238983166   625141759   193079297    5  Extended
    /dev/sda5    238983168   612630527   186823680   83  Linux
    /dev/sda6    612632576   625141759     6254592   82  Linux swap / Solaris

    Further details:

    > more /etc/fstab
    ...
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda6 during installation
    UUID=b33be99b-5c9e-449e-ad48-be608aeff001 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda7 during installation
    UUID=7c9071cc-b77b-40da-9f80-6b8a9a220cb1 none swap sw

    and

    > mount
    /dev/sda5 on / type ext4 (rw,errors=remount-ro,commit=0)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    fusectl on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
    gvfs-fuse-daemon on /home/piero/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=piero)

    I am running Ubuntu Oneiric + LXDE on a Dell Studio XPS machine, 64-bit, dual-booting with Windows 7. A month ago I resized the Ubuntu partition, and maybe I messed something up by doing that. Do you have any idea why this long booting is happening?

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a bunch of emails from PASS Summit “First Timers” who are also somehow new to SQL Server (by “somehow” I mean people with less than 6 months’ experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to:

    have a broad view of the entire SQL Server platform
    have some insight into some specific areas of SQL Server

    Given that I’m on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them… but let’s pretend it to be true), I’ve decided to write a post to answer this common question. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or - even better - answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there. I’ve imposed on myself a maximum of 6 sessions for each track. Why 6? Because it’s the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which sessions should I watch?”. Of course a Summit is not like a course, so a lot of the very basic concepts of well-established technologies won’t be found here. Analysis Services, Integration Services and MDX are not part of the Summit this time (at least the basic parts of them). Enough with that, let’s start with the session list that is ideal for a good overview of the whole SQL Server platform:

    Geospatial Data Types in SQL Server 2012
    Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
    XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
    Microsoft's Big Play for Big Data
    Dashboards: When to Choose Which MSBI Tool
    Microsoft BI End-User Tools 360°

    For what concerns Database Development, I recommend the following sessions:

    Understanding Transaction Isolation Levels
    What to Look for in Execution Plans
    Improve Query Performance by Fixing Bad Parameter Sniffing
    A Window into Your Data: Using SQL Window Functions
    Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
    Taking MERGE Beyond the Basics

    For Business Intelligence Information Delivery:

    Analyzing SSAS Data with Excel
    Building Compelling Power View Reports
    Managed Self-Service BI
    PowerPivot 101
    SharePoint for Business Intelligence
    The Best Microsoft BI Tools You've Never Heard Of

    And for Business Intelligence Architecture & Development:

    BI Power Hour
    Building a Tabular Model Database
    Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
    SSIS Design Patterns
    Storing Columnstore Indexes
    Hadoop and Its Ecosystem Components in Action

    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • why are my players drawn to the side of my viewport

    - by Jetbuster
    Following this admittedly brilliant and clean 2d camera class I have a camera on each player, and it works for multiplayer and i've divided the screen into two sections for split screen by giving each camera a viewport. However in the game it looks like this I'm not sure if thats their position relative to the screen or what The relevant gameScreen code, the makePlayers is setup so it could theoretically work for up to 4 players private void makePlayers() { int rowCount = 1; if (NumberOfPlayers > 2) rowCount = 2; players = new Player[NumberOfPlayers]; for (int i = 0; i < players.Length; i++) { int xSize = GameRef.Window.ClientBounds.Width / 2; int ySize = GameRef.Window.ClientBounds.Height / rowCount; int col = i % rowCount; int row = i / rowCount; int xPoint = 0 + xSize * row; int yPoint = 0 + ySize * col; Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize); Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2, viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2); players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport); } //players[1].Keyboard = true; } public override void Draw(GameTime gameTime) { base.Draw(gameTime); foreach (Player player in players) { GraphicsDevice.Viewport = player.PlayerCamera.ViewPort; GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform); map.Draw(GameRef.spriteBatch); // Draw the Player player.Draw(GameRef.spriteBatch); // Draw UI screen elements GraphicsDevice.Viewport = Viewport; ControlManager.Draw(GameRef.spriteBatch); GameRef.spriteBatch.End(); } } the player's initialize and draw methods are like so internal void Initialize() { this.score = 0; this.angle = (float)(Math.PI * 0 / 180);//Start sprite at it's default rotation int width = utils.scaleInt(picture.Width, imageScale); int height = utils.scaleInt(picture.Height, imageScale); this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2), width, height, Color.Black, game.Window.ClientBounds); playerCamera.Initialize(); } #region Methods public void Draw(SpriteBatch spriteBatch) { //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y); //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y); Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2); hitBox.Draw(spriteBatch); utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red); spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f); } as I said I think I'm gonna need to do something with the render position but I'm to entirely sure what or how it would be elegant to say the least
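
    A hedged aside, not taken from the original post: with one viewport per player, a 2D camera usually builds its transform from that viewport's own dimensions rather than the full window's, since the viewport already applies the on-screen offset. A minimal sketch of such a transform in XNA is below; Position, Zoom and ViewPort are assumed camera fields, and the tutorial's camera class may name them differently.

    ```csharp
    // Sketch only: a common per-viewport 2D camera transform in XNA.
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class Camera2D
    {
        public Vector2 Position;   // world position the camera is looking at
        public float Zoom = 1f;
        public Viewport ViewPort;  // this player's slice of the screen

        public Matrix GetTransform()
        {
            // Move the world so the camera target sits at the origin,
            // then center it inside *this* viewport, not the whole window.
            return Matrix.CreateTranslation(new Vector3(-Position, 0f)) *
                   Matrix.CreateScale(Zoom) *
                   Matrix.CreateTranslation(ViewPort.Width * 0.5f, ViewPort.Height * 0.5f, 0f);
        }
    }
    ```

    If the camera in the linked tutorial centers on the full GraphicsDevice viewport instead, the draw position would be offset by the difference between the window size and the split-screen viewport size, which could explain the symptom in the screenshot.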

    Read the article

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd year programming student(Just finished my first year so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++, I know about general inheritance, polymorphism, overloading operators, iterators, a little bit about templates(only class and function templates) etc. but not of the more advanced topics like linked lists and other sequential containers(containers in general I guess), enumerations, most of the standard library(other than like strings and vectors), and probably a bunch of other stuff I dont even know about yet. I subscribed to Unreal Engine 4 as I was very intrigued by their Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. Of course my end goal in doing this programming program is to eventually go into game/graphics programming. Since it's my summer off, I thought what better way then to actually apply some of my skills to a personal project so I actually have a firmer understanding of C++ past what my professors tell me. My questions are this: What would be the best way to start off making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially for someone that is still learning C++? Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me because before I thought the UE engine was pretty much pigeonholed into being for FPS games What should my expectations be going into UE4 and a game engine for the first time?(UE4 will be my first foray into making a game) What can I expect to gain from making things in UE4, in terms of making games and in terms of further fleshing out my knowledge of C++? Would you recommend I start off 100% using C++ for scripting or using the visual blueprints? Since I'm not a designer, how would I be able to add objects and designs to my game? For someone at my level is retaining the UE4 subscription worth it or is it better to cancel and resub when I learn enough about UE4 and C++? Lastly is there anything to be gained in terms of knowledge/insight through me looking at the source code for UE4? I opened it in VS2013, but noticed that most of the files were C# files and not cpp's. Thanks in advance for taking the time to answer.

    Read the article

  • Is my JavaScript/jQuery methodology good? [migrated]

    - by absentx
    I am seeking critique on what has become my normal methodology of writing JavaScript code. I have become heavily reliant on the jQuery library, but I think this has helped me learn the native language better also. Anyway, please critique the following style of JavaScript coding... Buried are a lot of questions of scope; if you could point out the strengths and weaknesses of this style I would appreciate it. var critique ={ start: function(){ globalness = 'GLOBAL-GLOBAL'; //Available to all critique's methods var notglobalness = 'LOCAL-LOCAL'; // Only available to critiques start method //Am I using the "method" teminology properly here?? $('#stuff').on('click','a.closer-target',function(){ $target = $(this); if($target.hasClass('active')){ $target.removeClass('active'); } else{ $target.addClass('active'); critique.madness($target); } }) console.log(notglobalness+': at least I am useful at home'); console.log('note here that: '+notglobalness+' is no longer available after this point, lets continue on:'); critique.madness(notglobalness); }, madness: function($e){ //Do a bunch of awesomeness with $e, //but continue to keep it seperate because you think its best to keep things isolated. //Send to the next function when complete here console.log('Here is globalness, which is still available from the start method of critique!! ' + globalness); console.log('Let us see if the globalness carries on to a new var object!!'); console.log('The locally isolated variable of NOTGLOBALNESS is available here, because it was passed to this method. Let us show it:'+$e); carryOn.start(); } } //end critique var carryOn={ start: function(){ console.log('any chance critique.globalness will work here??? lets see: ' +globalness); console.log('it absolutely does'); } } $(document).ready(critique.start); (I always struggle with which of the Stack Exchange sites is best to post "questions of theory" like this, but I think Programmers is the best, if not, as usual a mod will move it, etc...)

    Read the article

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml See the following code:

    private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
    {
        //Using the formula for Vector projection. Take the corner being passed in
        //and project it onto the given Axis
        float aNumerator = (theRectangleCorner.X * theAxis.X) + (theRectangleCorner.Y * theAxis.Y);
        float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
        float aDivisionResult = aNumerator / aDenominator;
        Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X, aDivisionResult * theAxis.Y);

        //Now that we have our projected Vector, calculate a scalar of that projection
        //that can be used to more easily do comparisons
        float aScalar = (theAxis.X * aCornerProjected.X) + (theAxis.Y * aCornerProjected.Y);
        return (int)aScalar;
    }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2, and they are found by subtracting one point from another, however these points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as it pertains to a Vector2, what does this represent? A point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online seems to tell me about a scalar of a vector as it's used in this context. I don't see angles or magnitudes with these vectors so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product is representing in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
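
    For reference (standard vector algebra, not from the tutorial itself), the projection computed above and the scalar returned at the end can be written as follows; c and a are just shorthand for theRectangleCorner and theAxis:

    ```latex
    % Projection of corner c onto axis a, and the scalar the method returns.
    \[
    \operatorname{proj}_{\mathbf{a}}(\mathbf{c})
      = \frac{\mathbf{c}\cdot\mathbf{a}}{\mathbf{a}\cdot\mathbf{a}}\,\mathbf{a},
    \qquad
    \text{aScalar}
      = \mathbf{a}\cdot\operatorname{proj}_{\mathbf{a}}(\mathbf{c})
      = \mathbf{c}\cdot\mathbf{a}.
    \]
    ```

    So, up to the intermediate rounding, the returned value is simply the dot product of the corner with the axis, i.e. the corner's coordinate along that axis, which is what SAT compares per axis to find minimum/maximum extents and check for overlap.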

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D Quad to the mouse cursor position I'm able : 1.) To get values into posX, posY, posZ 2.) Translate with the values from those 3 variables But the quad positioning I'm not able to do correctly in such a way that the 2D Quad is near the mouse cursor using those values from those 3 variables eg."posX, posY, posZ" I need the mouse cursor in the center of the 2D Quad. I'm hoping someone can help me achieve this. I've tried searching around with no avail. Heres the function that is ment to do the snapping but instead creates weird flicker or shows nothing at all only the 3d models show up : void display() { glClearColor(0.0,0.0,0.0,1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) { glCallList(*I); } if(DrawArea == true) { glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); cerr << winZ << endl; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); gluUnProject(winX, winY, winZ , modelview, projection, viewport, &posX, &posY, & posZ); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glTranslatef(posX , posY, posZ); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f(0.5, 0.5, 0); glTexCoord2f (1.0, 0.0); glVertex3f(0, 0.5, 0); glTexCoord2f (1.0, 1.0); glVertex3f(0, 0, 0); glTexCoord2f (0.0, 1.0); glVertex3f(0.5, 0, 0); glEnd(); } SwapBuffers(hDC); } I'm using : OpenGL 3.0 WIN32 API C++ GLSL if you really want the full source here it is - http://pastebin.com/1Ncm9HNf , Its pretty messy.

    Read the article

  • Separate Action from Assertion in Unit Tests

    - by DigitalMoss
    Setup Many years ago I took to a style of unit testing that I have come to like a lot. In short, it uses a base class to separate out the Arrangement, Action and Assertion of the test into separate method calls. You do this by defining method calls in [Setup]/[TestInitialize] that will be called before each test run. [Setup] public void Setup() { before_each(); //arrangement because(); //action } This base class usually includes the [TearDown] call as well for when you are using this setup for Integration tests. [TearDown] public void Cleanup() { after_each(); } This often breaks out into a structure where the test classes inherit from a series of Given classes that put together the setup (i.e. GivenFoo : GivenBar : WhenDoingBazz) with the Assertions being one line tests with a descriptive name of what they are covering [Test] public void ThenBuzzSouldBeTrue() { Assert.IsTrue(result.Buzz); } The Problem There are very few tests that wrap around a single action so you end up with lots of classes so recently I have taken to defining the action in a series of methods within the test class itself: [Test] public void ThenBuzzSouldBeTrue() { because_an_action_was_taken(); Assert.IsTrue(result.Buzz); } private void because_an_action_was_taken() { //perform action here } This results in several "action" methods within the test class but allows grouping of similar tests (i.e. class == WhenTestingDifferentWaysToSetBuzz) The Question Does someone else have a better way of separating out the three 'A's of testing? Readability of tests is important to me so I would prefer that, when a test fails, that the very naming structure of the tests communicate what has failed. If someone can read the Inheritance structure of the tests and have a good idea why the test might be failing then I feel it adds a lot of value to the tests (i.e. GivenClient : GivenUser : WhenModifyingUserPermissions : ThenReadAccessShouldBeTrue). I am aware of Acceptance Testing but this is more on a Unit (or series of units) level with boundary layers mocked. EDIT : My question is asking if there is an event or other method for executing a block of code before individual tests (something that could be applied to specific sets of tests without it being applied to all tests within a class like [Setup] currently does. Barring the existence of this event, which I am fairly certain doesn't exist, is there another method for accomplishing the same thing? Using [Setup] for every case presents a problem either way you go. Something like [Action("Category")] (a setup method that applied to specific tests within the class) would be nice but I can't find any way of doing this.
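
    One possible arrangement, sketched here with hypothetical names rather than the asker's code: NUnit discovers nested fixture classes, and [SetUp] methods in a base class generally run before those in a derived class, so grouping the tests that share an action under a nested class gives each group its own "because" without a class-wide setup applying to every test.

    ```csharp
    // Sketch (NUnit): one nested fixture per shared action.
    using NUnit.Framework;

    [TestFixture]
    public class WhenTestingDifferentWaysToSetBuzz
    {
        protected object result;            // stand-in for the real result type

        [SetUp]
        public void BeforeEach()            // shared Arrange for all groups below
        {
            result = new object();
        }

        [TestFixture]
        public class WhenAnActionWasTaken : WhenTestingDifferentWaysToSetBuzz
        {
            [SetUp]
            public void Because()           // Act, runs only for this group's tests
            {
                // perform the action here
            }

            [Test]
            public void ThenBuzzShouldBeTrue()
            {
                Assert.IsNotNull(result);   // Assert (placeholder assertion)
            }
        }
    }
    ```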

    Read the article

  • Problem trying to lock framerate at 60 FPS

    - by shad0w
    I've written a simple class to limit the framerate of my current project. But it does not work as it should. Here is the code: void FpsCounter::Process() { deltaTime = static_cast<double>(frameTimer.GetMsecs()); waitTime = 1000.0/fpsLimit - deltaTime; frameTimer.Reset(); if(waitTime <= 0) { std::cout << "error, waittime: " << waitTime << std::endl; } else { SDL_Delay(static_cast<Uint32>(waitTime)); } if(deltaTime == 0) { currFps = -1; } else { currFps = 1000/deltaTime; } std::cout << "--Timings--" << std::endl; std::cout << "Delta: \t" << deltaTime << std::endl; std::cout << "Delay: \t" << waitTime << std::endl; std::cout << "FPS: \t" << currFps << std::endl; std::cout << "-- --" << std::endl; } Timer::Timer() { startMsecs = 0; } Timer::~Timer() { // TODO Auto-generated destructor stub } void Timer::Start() { started = true; paused = false; Reset(); } void Timer::Pause() { if(started && !paused) { paused = true; pausedMsecs = SDL_GetTicks() - startMsecs; } } void Timer::Resume() { if(paused) { paused = false; startMsecs = SDL_GetTicks() - pausedMsecs; pausedMsecs = 0; } } int Timer::GetMsecs() { if(started) { if(paused) { return pausedMsecs; } else { return SDL_GetTicks() - startMsecs; } } return 0; } void Timer::Reset() { startMsecs = SDL_GetTicks(); } The "FpsCounter::Process()" Method is called everytime at the end of my gameloop. I've got the problem that the loop is correctly delayed only every second frame, so it runs one frame delayed at 60 FPS and the next without delay at over 1000 fps. I am searching the error quite a while now, but I do not find it. I hope somebody can point me in the right direction.
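
    A minimal sketch of one common way to cap a loop at a fixed rate, written in C# for brevity since the structure carries over directly to the SDL/C++ code above; the class and member names are mine, not the poster's. The important detail is that the timer is restarted after the delay, so the next frame's elapsed time measures only the work done, not the previous frame's sleep. In the Process() method above the timer is reset before SDL_Delay, so the next delta includes the delay, which would explain the alternating negative waitTime the poster describes.

    ```csharp
    // Sketch: simple frame limiter; reset the timer after sleeping.
    using System.Diagnostics;
    using System.Threading;

    class FrameLimiter
    {
        private readonly double targetMs;
        private readonly Stopwatch timer = Stopwatch.StartNew();

        public FrameLimiter(double fpsLimit) { targetMs = 1000.0 / fpsLimit; }

        // Call once at the end of each game-loop iteration.
        public void EndFrame()
        {
            double elapsed = timer.Elapsed.TotalMilliseconds; // work done this frame
            double wait = targetMs - elapsed;
            if (wait > 0)
                Thread.Sleep((int)wait);                      // pad the frame out
            timer.Restart();                                  // next frame measured from *after* the sleep
        }
    }
    ```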

    Read the article

  • Nails vs Screws (C# List vs Dictionary)

    - by MarkPearl
    General - This may sound like a typical noob statement, but I’m finding out in a very real way that just because you have a solution to a problem, it doesn’t necessarily mean it is the best solution. This was reiterated to me when a friend of mine suggested I look at using Dictionaries instead of Lists for a particular problem - he was right; I had always just assumed that because lists solved my problem I did not need to look elsewhere. So my new manifesto to counter this ageless problem is as follows:

    Look for a solution that will logically work
    Once you have a solution, look for possible alternatives
    Decide why your current solution is the best approach compared to the alternatives
    If it is, use it till something better comes along; if it isn’t, change

    What’s the difference between Lists & Dictionaries? Both lists and dictionaries are used to store collections of data. Assume we had the following declarations:

    var dic = new Dictionary<string, long>();
    var lst = new List<long>();
    long data;

    With a list, you simply add the item and it will be appended to the end of the list:

    lst.Add(data);

    With a dictionary, you need to specify some sort of key along with the data you want to add, so that the item can be uniquely identified:

    dic.Add(uniquekey, data);

    Because a dictionary gives you a unique identifier, behind the scenes it provides all sorts of optimized algorithms to find your associated data. What this means is that if you want to access your data, it is a lot faster than a List. So when is it appropriate to use either class? For me, if I can guarantee that each item in my collection will have a unique identifier, then I will use Dictionaries instead of Lists, as there is a considerable performance benefit when accessing each data item. If I cannot make this sort of guarantee, then by default I will use a list. I know this is all really basic, and I hope I haven’t missed some fundamental principle… If anyone would like to add their 2 cents, please feel free to do so…
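
    To make the access-speed point concrete, here is a small sketch of my own (not from the original post) contrasting the two lookups; a List has to scan its elements, while a Dictionary hashes the key and jumps straight to it:

    ```csharp
    // Sketch: looking up an item in a List vs a Dictionary.
    using System;
    using System.Collections.Generic;

    class LookupDemo
    {
        static void Main()
        {
            var lst = new List<long> { 10, 20, 30 };
            var dic = new Dictionary<string, long>();
            dic.Add("a", 10);
            dic.Add("b", 20);
            dic.Add("c", 30);

            // List: O(n) - scans the elements one by one until a match is found.
            bool inList = lst.Contains(20);

            // Dictionary: O(1) on average - hashes the key and goes to its bucket.
            long value;
            bool inDict = dic.TryGetValue("b", out value);

            Console.WriteLine("{0} {1} {2}", inList, inDict, value);
        }
    }
    ```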

    Read the article

  • Why can't Ubuntu find an ext3 filesystem on my hard-drive?

    - by urig
    This question is related to this question: Not enough components to start the RAID array? I'm trying to retrieve data from a "Western Digital MyBook World Edition (white light)" NAS device. This is basically an embedded Linux box with a 1TB HDD in it formatted in ext3. It stopped booting one day for no apparent reason. I have extracted the HDD from the NAS device and installed it in a desktop machine running Ubuntu 10.10 in the hope of accessing the files on the drive. I have followed instructions in this forum post, intended to mount the drive through Terminal: http://mybookworld.wikidot.com/forum/t-90514/how-to-recover-data-from-wd-my-book-world-edition-nas-device#post-976452 I have identified the partition that I want to mount and recover files from as /dev/sd4 by running "fdisk -l" and getting this: Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0001cf00 Device Boot Start End Blocks Id System /dev/sdb1 5 248 1959930 fd Linux raid autodetect /dev/sdb2 249 280 257040 fd Linux raid autodetect /dev/sdb3 281 403 987997+ fd Linux raid autodetect /dev/sdb4 404 121601 973522935 fd Linux raid autodetect// When I try to mount using: "mount -t ext3 /dev/sdb4 /media/xyz" I get the following error: mount: wrong fs type, bad option, bad superblock on /dev/sdb4, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And "dmesg | tail" shows me: [ 15.184757] [drm] Initialized nouveau 0.0.16 20090420 for 0000:01:00.0 on minor 0 [ 15.986859] [drm] nouveau 0000:01:00.0: Allocating FIFO number 1 [ 15.988379] [drm] nouveau 0000:01:00.0: nouveau_channel_alloc: initialised FIFO 1 [ 16.353379] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0 [ 16.705944] tg3 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex [ 16.705951] tg3 0000:02:00.0: eth0: Flow control is off for TX and off for RX [ 16.706102] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 19.125673] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0 [ 27.600012] eth0: no IPv6 routers present [ 373.478031] EXT3-fs (sdb4): error: can't find ext3 filesystem on dev sdb4. I guess that last line is the punch line :) Why can't it find the ext3 filesystem on my drive? What do I need to do to mount this partition and copy its contents? Does it have anything to do with the drive being part of a RAID Array (see question mentioned above)? Many thanks to any who can help.

    Read the article

  • Java @Contented annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. Theres no single optimal layout for both single-thread and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
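
    As a cross-language illustration only: @Contended is a JVM annotation, but the idea it automates, keeping a hot, frequently written field on its own cache line so writes to it do not invalidate lines shared with unrelated fields, can be hand-rolled with explicit layout. The sketch below is in C#, and the 64-byte cache-line size is an assumption, not something stated in the post.

    ```csharp
    // Sketch: manual padding so the hot counter does not share a cache line
    // with neighbouring data (analogue of what @Contended arranges automatically).
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Explicit, Size = 192)]
    public struct PaddedCounter
    {
        // 64 bytes of dead space before and after the field (assumed line size),
        // so writes to Value stay on a line no other field occupies.
        [FieldOffset(64)]
        public long Value;
    }
    ```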

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    CodePlex Daily Summary for Wednesday, September 05, 2012Popular ReleasesDesktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: HOW TO RUN THE TEXT EDITOR Download the HPAD ARCHIVED FILES which is in .rar format Extract using Winrar Make sure that extracted files are in the same folder Double-Click on HPAD.exe application fileTelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. This release is all about performance and fixes Support: "Or" and "Does Not contain" filter options Improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover Add EntityFramework examples in ExampleWebApplication Many other improvements and fixes Fix invalid cast on CustomAggregates Support for ...ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build serverJavaScript Grid: Release 09-05-2012: Release 09-05-2012xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta This release provides a test runner plugin for dotCover 2.1 beta, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: You do NOT need this if you also have ReSharper and the existing 0.6.1 release installed. DotCover will use ReSharper's test runners if available. This release includes th...B INI Sharp Library: B INI Sharp Library v1.0.0.0 Realsed: The frist realsedActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. 
New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsA Simple Eng-Hindi CMS: A simple English- Hindi dual language content management system for small business/personal websites.Active Social Migrator: This project for managing the Active Social migration tool.ANSI Console User Control: Custom console control for .NET WinformsAutoSPInstallerGUI: GUI Configuration Tool for SPAutoInstaller Codeplex ProjectCode Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks if c# code is documented the way it's configured in the config of the policy. Code Dojo/Kata - Free Time Coding: Doing some katas of the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCataloguefjycUnifyShow: fjycUnifyShowHidden Capture (HC): HC is simple and easy utility to hidden and auto capture desktop or active windowHRC Integration Services: Fake SQL Server Integration Services. LOLKooboo CMS Sites Switcher: Kooboo CMS Sites SwitcherMod.CookieDetector: Orchard module for detecting whether cookies are enabledMyCodes: Created!MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo random number. 
Net Core Tech - Medical Record System: This is a Medical Record System ProjectOraPowerShell: PowerShell library for backup and maintenance of a Oracle Database environment under Microsoft Windows 2008PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatioPyrogen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.restMs: wil be deleted soonScript.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.SpringExample-Pagination: Simple Spring example with PaginationXNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • Screen brightness dull after upgrade to Ubuntu 14.04

    - by user288426
    After upgrading to Ubuntu 14.04 I found that I could not increase screen brightness. I'm using a Samsung NC110 netbook. Initially the function key to modify brightness did not respond at all. After implementation of the first part of the fix, the key came alive and the brightness bar could be modified. Yet at maximum brightness indicated the screen still remained very dull. The 2nd part of the fix cures that problem, at least for this type of machine. First part of the fix was copied and modified in line with my experience from following post: How to control Brightness Open a terminal (Ctrl + Alt + T). Then type sudo nano /etc/default/grub. It will ask for your password. Type it in. Around the 11th line, there will be something like: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash". Change it to: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor" Save the file by Ctrl+O followed by Ctrl+X. Then run sudo update-grub in the terminal. Reboot and see if backlight adjustment works. Then I needed to modify the rc.local file. Therefore read below fix to understand the procedure: problem with adjusting brightness Ubuntu 14.04 In my case I had 2 folders listed under /sys/class/backlight which were: intel_backlight samsung I realized that the samsung folder is governing. I had to modify the check for max brightness to: cat /sys/class/backlight/samsung/max_brightness In my case the max value obtained is 8. Besides putting this into rc.local, I also had to uncomment the first line to get this working. My rc.local under /etc/ now looks as follows: !/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. echo 8 > /sys/class/backlight/samsung/brightness exit 0 Now I can modify brightness on my netbook and also can get the screen up to its maximum brightness. Hope this is helpful.

    Read the article

  • How to get around the Circular Reference issue with JSON and Entity

    - by DanScan
    I have been experimenting with creating a website that leverages MVC with JSON for my presentation layer and Entity Framework for the data model/database. My issue comes into play with serializing my model objects into JSON. I am using the code-first method to create my database. When doing code first, a one-to-many relationship (parent/child) requires the child to have a reference back to the parent. (The example code may have a typo, but you get the picture.)

    class parent
    {
        public List<child> Children{get;set;}
        public int Id{get;set;}
    }

    class child
    {
        public int ParentId{get;set;}
        [ForeignKey("ParentId")]
        public parent MyParent{get;set;}
        public string name{get;set;}
    }

    When returning a "parent" object via a JsonResult, a circular reference error is thrown because "child" has a property of class parent. I have tried the ScriptIgnore attribute, but then I lose the ability to look at the child objects, and I will need to display information in a parent/child view at some point. I have tried to make base classes for both parent and child that do not have a circular reference. Unfortunately, when I attempt to send the baseParent and baseChild, these are read by the JSON parser as their derived classes (I am pretty sure this concept is escaping me).

    Base.baseParent basep = (Base.baseParent)parent;
    return Json(basep, JsonRequestBehavior.AllowGet);

    The one solution I have come up with is to create "view" models. I create simple versions of the database models that do not include the reference to the parent class. These view models each have a method to return the database version and a constructor that takes the database model as a parameter (viewmodel.name = databasemodel.name). This method seems forced, although it works. NOTE: I am posting here because I think this is more discussion-worthy. I could leverage a different design pattern to overcome this issue, or it could be as simple as using a different attribute on my model. In my searching I have not seen a good method to overcome this problem. My end goal would be to have a nice MVC application that heavily leverages JSON for communicating with the server and displaying data, while maintaining a consistent model across layers (or as best as I can come up with).
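
    A minimal sketch of the view-model approach described above, using my own hypothetical names around the post's parent/child entities: a flat DTO with no back-reference, so the serializer has nothing circular to walk.

    ```csharp
    // Sketch: flat view models for serialization; no reference back to the parent.
    using System.Collections.Generic;
    using System.Linq;

    public class ChildViewModel
    {
        public string Name { get; set; }
    }

    public class ParentViewModel
    {
        public int Id { get; set; }
        public List<ChildViewModel> Children { get; set; }

        public static ParentViewModel FromEntity(parent p)
        {
            return new ParentViewModel
            {
                Id = p.Id,
                Children = p.Children
                            .Select(c => new ChildViewModel { Name = c.name })
                            .ToList()
            };
        }
    }

    // Usage in the controller (sketch):
    //   return Json(ParentViewModel.FromEntity(someParent), JsonRequestBehavior.AllowGet);
    ```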

    Read the article

  • PASS: FY10 Actuals Posted

    - by Bill Graziano
    Earlier this year we published preliminary fiscal year 2010 financials to the Governance page on the PASS web site.  Please remember that FY10 runs from July 1st, 2009 through June 30th, 2010 and includes the November 2009 Summit.  We do our fiscal year this way so that the Summit falls earlier in the fiscal year.  The financials we had posted were P&L numbers at the portfolio level.  Prior to this we had posted our detailed budget but only posted the auditors report at the end of each year.  Today we updated our published financials to include: Pre-audit actuals from FY10 at the same level as our budget.  The document has both actuals and budget for FY10 side by side.  This is over 20 pages of detailed financial information covering hundreds of line-items. A letter describing key differences between our budget and actuals.  I walked through each line item where the difference was greater than $25,000 and explained what happened and why. We updated the financial graph going back to 2003 to include FY10. This update should “close the loop” on our financials.  You can now start with the published budget and compare it to the finished financials at the same level of detail.  We also plan to publish the auditor’s report when that is completed -- as we do every year. Overall I’m very happy with how FY10 turned out.  Keep in mind that this was the November 2009 Summit so we were still facing economic challenges.  With all that we were roughly break-even showing a $15,000 profit on $3.9 million of revenue.  I didn’t find anything shocking in reviewing our actual vs. budget but there were a few things that needed explanation.  You can see those in the letter on the governance page. Please keep in mind that these are the actuals from our operating financials.  The auditor may have us make adjustments for depreciation or other financial transactions.  We may also account for certain transactions differently for tax purposes than we do for financial reporting purposes.  I feel these financial statements give you the clearest picture of how our organization spends its money. We were late publishing these this year.  We were working through some tax issues and that delayed our ability to file our final tax forms which delayed this process.  In hindsight I should have published these documents as soon as we had them and not waited for the tax issues.  We’ll do this better in the future. And on a final note, you don’t need to login to view these documents.  If you have any questions you can post them here.  If we get more than a few questions we may see about creating some forums for financial issues on the PASS web site.

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline. The problem The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises? Things we've done with little or no success: Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out before) Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay) Extend deadlines (if we charge the same, but take longer, we'll end up losing money) Things we've done with some success: Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade off) Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses) Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.) Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system. I make most of the commits at this time period. My current git workflow is as such: there is master branch only (no develop/release) I make a new branch when I want to do a feature, do lots of commits, and then when I'm done, I merge that branch back into master, and usually push it to remote. ...except, I am usually not done. I often come back to alter one thing or another and every time I think it is done, but it can be 3-4 commits before I am really done and move onto something else. Problem The problem I have now is that .. my feature branch tree is merged and pushed into master and remote master, and then I realize that I am not really done with that feature, as in I have finishing touches I want to add, where finishing touches may be cosmetic only, or may be significant, but they still belong to that one feature I just worked on. What I do now Currently, when I have extra after-the-fact commits like this, I solve this problem by rolling back my merge, and re-merging my feature branch into master with my new commits, and I do that so that git tree looks clean. One clean feature branch branched out of master and merged back into it. I then push --force my changes to origin, since my origin doesn't see much traffic at the moment, so I can almost count that things will be safe, or I can even talk to other dev if I have to coordinate. But I know it is not a good way to do this in general, as it rewrites what others may have already pulled, causing potential issues. And it did happen even with my dev, where git had to do an extra weird merge when our trees diverged. Other ways to solve this which I deem to be not so great Next best way is to just make those extra commits to the master branch directly, be it fast-forward merge, or not. It doesn't make the tree look as pretty as in my current way I'm solving this, but then it's not rewriting history. Yet another way is to wait. Maybe wait 24 hours and not push things to origin. That way I can rewrite things as I see fit. The con of this approach is time wasted waiting, when people may be waiting for a fix now. Yet another way is to make a "new" feature branch every time I realize I need to fix something extra. I may end up with things like feature-branch feature-branch-html-fix, feature-branch-checkbox-fix, and so on, kind of polluting the git tree somewhat. Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop this goal, if technically it is not a possibility.

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data and does not support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has worked well for a few years with the app running isolated on a single device.
    Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:
    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key, a string, or another type would affect the entire code base and likely result in more bugs than it's worth.
    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.
    One idea I have been toying with is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices, just the few devices shared by a given iCloud account. I'm open to suggestions. Thanks.
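
    The bit-splitting idea described above can be sketched in a few lines. This is only an illustration, not the poster's code: it assumes the key is treated as an unsigned 64-bit value, reserves 5 bits for a device id (so up to 32 devices per iCloud account), and every name in it is made up. If the existing time-based keys fit in the low 59 bits, they read back with a device id of 0, which might let old data be grandfathered in, though that depends on how the original keys were generated.

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    constexpr int      kDeviceBits  = 5;                                  // room for 32 devices
    constexpr uint64_t kCounterMask = (1ULL << (64 - kDeviceBits)) - 1;   // low 59 bits

    // Pack a small per-device id and a locally incremented counter into one key.
    uint64_t makeKey(uint64_t deviceId, uint64_t localCounter) {
        assert(deviceId < (1ULL << kDeviceBits));
        return (deviceId << (64 - kDeviceBits)) | (localCounter & kCounterMask);
    }

    uint64_t deviceIdOf(uint64_t key) { return key >> (64 - kDeviceBits); }
    uint64_t counterOf(uint64_t key)  { return key & kCounterMask; }

    int main() {
        uint64_t key = makeKey(3, 1385000000);   // device 3, a time-seeded counter value
        std::printf("device=%llu counter=%llu\n",
                    (unsigned long long)deviceIdOf(key),
                    (unsigned long long)counterOf(key));
    }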

    Read the article

  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2 and need to find an elegant way to make everything work and fit together. Allow me to explain.
    I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player, and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML, I decided to have each class handle input (if required) in its update call.
    Up until now I have been passing in dependencies as needed. For example, in the player class, when a movement key is pressed I call a method on the collision class to check whether the position the player wants to move to would be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better; I'm just not sure how.
    I now have more complex things to implement. For example, a player should be able to walk up to an object on the ground, press a key to pick it up/loot it, and have it show up in the inventory. This means a few things need to happen:
    - Check if the player is in range of a lootable item on keypress; otherwise do not proceed.
    - Find the item.
    - Update the sprite texture on the item from its default texture to a "looted" texture.
    - Update the collision for the item: it might have changed shape or been removed completely.
    - Update the inventory with the added item.
    How do I make everything communicate? With my current system I will end up with my classes going out of scope and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.
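
    One common way to handle this kind of cross-class communication, without every class holding references to every other class, is a small event/message bus that systems subscribe to. The sketch below only illustrates that idea and is not SFML-specific; every type and function name in it (EventBus, ItemLootedEvent, the lambdas standing in for the collision and inventory classes) is invented for the example.

    #include <cstdio>
    #include <functional>
    #include <vector>

    // Hypothetical event payload; a real game would define several event types.
    struct ItemLootedEvent { int itemId; };

    class EventBus {
    public:
        using Handler = std::function<void(const ItemLootedEvent&)>;

        void subscribe(Handler handler) { handlers_.push_back(std::move(handler)); }

        void publish(const ItemLootedEvent& e) {
            for (auto& h : handlers_) h(e);   // notify every subscribed system
        }

    private:
        std::vector<Handler> handlers_;
    };

    int main() {
        EventBus bus;

        // Stand-ins for the collision and inventory classes registering their reactions.
        bus.subscribe([](const ItemLootedEvent& e) { std::printf("collision: remove collider %d\n", e.itemId); });
        bus.subscribe([](const ItemLootedEvent& e) { std::printf("inventory: add item %d\n", e.itemId); });

        // The player, on keypress and after the range check, just publishes the event.
        bus.publish(ItemLootedEvent{42});
    }

    With something like this, the player only needs to know about the bus; the collision, tile, and inventory classes register the reactions they care about, which avoids both the web of cross-references and the single "big manager" class.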

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron cards. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below.
    We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then we have a different set of disk trays coming off the other two SAS cards for non-Exadata workloads, such as regular user file storage.
    We have 2 InfiniBand cards, which allow us to do a full mesh directly into the back of the nearby production Exadata, specifically for fast backups and restores over IB. You can see a 3rd IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it.
    The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting the performance of the Exadata at all.
    We also have a couple of FC HBAs, used both for NDMP backups to an Oracle/StorageTek tape library and for FC clients to come in and use some storage on the 7420.
    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first 5 cards, then the "C" slot, which holds the dedicated cluster card (called the Clustron), and then another 5 slots numbered 5-9.
    Some rules for the slots:
    - Slots 1 & 8 are automatically populated with the two default SAS cards.
    - The only other slots you can add SAS cards to are 2 & 7.
    - Slots 0 and 9 can only hold FC cards. Nothing else. So if you have four SAS cards, you are down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of those slots on an FC card, which can go into 0 or 9 instead.
    - If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4.
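
    If you want to sanity-check a planned card layout, the slot rules above are easy to encode as data. The snippet below is only a made-up illustration (the enum, function, and array names are all invented); the rules themselves are taken directly from the list above.

    #include <array>
    #include <cstdio>

    enum class Card { SAS, FC, IB, TenGig };

    // True if the card type may be placed in the given slot (0-9) on one controller.
    bool allowedInSlot(Card card, int slot) {
        if (slot == 0 || slot == 9) return card == Card::FC;        // 0 and 9: FC only
        if (slot == 1 || slot == 8) return card == Card::SAS;       // default SAS pair lives here
        if (card == Card::SAS)      return slot == 2 || slot == 7;  // extra SAS cards: 2 and 7 only
        return true;   // IB / 10Gig / FC fit the remaining slots, though FC is better kept in 0 or 9
    }

    // Preferred population order from the post.
    constexpr std::array<int, 8> kFillOrder = {9, 0, 7, 2, 6, 3, 5, 4};

    int main() {
        std::printf("IB card in slot 0 allowed?   %s\n", allowedInSlot(Card::IB, 0) ? "yes" : "no");
        std::printf("Extra SAS in slot 7 allowed? %s\n", allowedInSlot(Card::SAS, 7) ? "yes" : "no");
        std::printf("First slot to populate:      %d\n", kFillOrder[0]);
    }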

    Read the article

< Previous Page | 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089  | Next Page >