Search Results

Search found 8692 results on 348 pages for 'per magnusson'.


  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know about it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that already contains data – prior to SSDT you would need to define a DEFAULT constraint, yet it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment. That is the situation Smart Defaults is meant to alleviate.

    The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool!

    On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, Smart Defaults could be seen as “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column.

    @Jamiet

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggested computing the screen position of the pixel as follows:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition = output.Position;
        }

        PixelShaderFunction()
        {
            input.ScreenPosition.xy /= input.ScreenPosition.w;
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    The question is: if I compute the position in the vertex shader instead (which should improve performance, as the vertex shader runs significantly fewer times than the pixel shader), would I get per-vertex lighting instead? Here is how I want to do this:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition.xy = output.Position / output.Position.w;
        }

        PixelShaderFunction()
        {
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    What exactly happens to the data I pass from the VS to the PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks.

    P.S. I am optimizing the point light shader, so I actually pass a sphere geometry into the VS.
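    For background: the rasterizer interpolates vertex shader outputs perspective-correctly (linearly in attribute/w and in 1/w), which is why the order of the divide can matter in principle. Here is a small numeric sketch of the difference, with made-up clip-space values and Python standing in for the hardware math, not shader code:

        # Perspective-correct interpolation, the way the rasterizer blends
        # vertex outputs: linear in a/w and in 1/w, then recombined.
        def persp_lerp(a0, w0, a1, w1, t):
            num = (1 - t) * a0 / w0 + t * a1 / w1
            den = (1 - t) / w0 + t / w1
            return num / den

        # Two vertices in clip space at different depths (w0 != w1).
        x0, w0 = 2.0, 1.0   # screen x = x0 / w0 = 2.0
        x1, w1 = 4.0, 4.0   # screen x = x1 / w1 = 1.0
        t = 0.5

        # Approach 1: pass clip-space x and w through, divide in the pixel shader.
        x_i = persp_lerp(x0, w0, x1, w1, t)
        w_i = persp_lerp(w0, w0, w1, w1, t)
        print(x_i / w_i)                                 # 1.5, linear in screen space

        # Approach 2: divide in the vertex shader, interpolate the ratio x/w.
        print(persp_lerp(x0 / w0, w0, x1 / w1, w1, t))   # 1.8, skewed toward the near vertex

    The two orders agree only when w is constant across a triangle; when w varies little between the vertices of each triangle the error is small, which could explain why both versions looked identical in your test.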

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    - by Fabrice Marguerie
    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I’ve just moved all my websites elsewhere. I’m now using Arvixe and OrcsWeb.

    I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I’m a Microsoft MVP. I’d like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer.

    I use Arvixe for all my other websites, the major ones being SharpToolbox.com, JavaToolbox.com, AxToolbox.com, Proagora.com, LinqInAction.net, ClairDeBulle.com, and madgeek.com.

    Moving all my websites wasn’t a walk in the park, but it was well worth it. Let’s consider what I get with Arvixe:

      • Unlimited disk space
      • Unlimited data transfer
      • Unlimited domains
      • Dedicated application pools
      • Unlimited POP3 and IMAP mailboxes
      • Unlimited SQL Server 2008 databases
      • Unlimited MySQL 5 databases
      • .NET 1.1, 2, 3.5 and 4
      • Full trust
      • IIS 7
      • Daily backups
      • A powerful and easy to use control panel
      • And more!

    All of this for $8 per month. If you don’t need all of the above features, you can even get an offer as cheap as $5 per month. You can get even better rates if you use coupon codes, such as TOPHOST (30% discount) or MVCHOSTING (20% discount). All in all, I paid only $134 for two years for a great hosting service!

    Maybe it’s time for you to move too?

    Disclaimer: the links to OrcsWeb and Arvixe are affiliate links that may bring me some money home if you sign up.

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia and T-Mobile, to monitor their network performance and improve service levels.

    Challenges

      • Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage and optimize.
      • Enable the implementation of distributed architectures, including one database per server, while maintaining a low Total Cost of Ownership (TCO).
      • Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.

    Solution

      • Evaluation of several databases and selection of MySQL based on its high performance, manageability, and low TCO.
      • The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second. Up to 50 million records are inserted every day in each database.
      • Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones.
      • Data is periodically aggregated, with the original records being overwritten, as detailed information becomes unnecessary to operators after a few weeks.
      • The exponential growth in mobile data traffic, driven by the proliferation of smartphones and the usage of social media, requires ever more powerful solutions to monitor, analyze and turn network data into actionable business intelligence.
      • With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers.
      • MySQL-based Polystar solutions enable operators to monitor, manage and improve the service levels of their telecom networks in over a dozen countries from a single location.
      • The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers’ needs as they evolve.

    “MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best.” Mats Söderlindh, COO at Polystar.

    Read the article

  • Traffic estimation for a multiplayer flash game

    - by Steve Addington
    Hey, I want to know if my rough traffic estimations are right. This would be for a pretty simple realtime Flash game in the style of HaxBall (but not a soccer game); here’s a video of it: http://www.youtube.com/watch?v=z_xBdFg1RcI

    So here comes my estimation; I don’t know if it is realistic, and I hope someone can help me. Consider the packet attached as a typical one sent every 200 ms: 148 bytes of payload plus 64 bytes of header makes around a 200-byte packet. The server will receive 200 bytes x 6 players x 5 times a second = 6,000 bytes/s = 5.85 KB/s = 46.9 kbit/s. Plus, it has to send all of it back to the players, so at this point we are at 94 kbit/s. The server receives all the information, performs the definitive calculation and sends the new positions to all players, in a bigger packet of around 900 bytes that has to be delivered to the other 6, which makes 900 bytes x 6 players x 5 times a second = 27,000 bytes/s = 26 KB/s = 210 kbit/s.

    Overall that would be 26 KB per second; that’s like 130 MB of traffic per hour for a 6-player room. But somehow I think the numbers are too high – that would be really a lot of traffic for such a simple game. Did I calculate something wrong?
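    For what it’s worth, the arithmetic can be sanity-checked in a few lines (Python here, with the packet sizes from the post taken at face value):

        # Sanity check of the estimate, using the numbers from the post.
        players = 6
        ticks_per_s = 5                 # one packet every 200 ms

        client_packet = 200             # bytes: 148 payload + ~64 header, rounded up
        server_packet = 900             # bytes: aggregated state sent to each player

        inbound = client_packet * players * ticks_per_s    # 6,000 B/s into the server
        outbound = server_packet * players * ticks_per_s   # 27,000 B/s out of the server

        print((inbound + outbound) * 8 / 1000)             # 264.0 kbit/s total
        print((inbound + outbound) * 3600 / 1e6)           # 118.8 MB/hour for one room

    That lands in the same ballpark as the 130 MB/hour in the post, so the estimate itself looks arithmetically sound; whether roughly 33 KB/s per 6-player room is acceptable is a hosting-cost question rather than a calculation error.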

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly: for non-destructible/non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If a generated mesh per chunk is the way to go, what kind of algorithm might I want to use, given that I only EVER need the outer layer rendered?

    I'm using 3D Perlin noise for terrain generation (for overhangs/caves/etc.). The layout is fantastic, but even for around 20k visible faces it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain rendered, but with 20k quad instances it's still pretty sluggish (30 fps on my machine).

    My goal is for the world to be made of quite small cubes. Where multiple games (e.g. Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 in width/length and 9 high. Along with a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads.

    So I have been trying to look into changing my method from instancing to mesh generation on a chunk-by-chunk basis; a sketch of the usual starting point follows below. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know if it's the better route for my situation or not. I'm also starting to doubt my need of 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too.

    Thanks, Mythics
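    For reference, here is a minimal sketch of the chunk-meshing starting point usually called culled meshing: walk the voxel grid once and emit only the faces whose neighbouring cell is empty. This is Python with a made-up random chunk for brevity, not XNA code; greedy meshing or marching cubes would reduce the quad count further by merging coplanar faces:

        import numpy as np

        # Face directions as (dx, dy, dz) neighbour offsets.
        FACES = {
            "top":    (0, 1, 0),
            "bottom": (0, -1, 0),
            "left":   (-1, 0, 0),
            "right":  (1, 0, 0),
            "front":  (0, 0, 1),
            "back":   (0, 0, -1),
        }

        def visible_faces(solid):
            """Yield (x, y, z, face) for every solid-cell face exposed to air."""
            sx, sy, sz = solid.shape
            for x, y, z in zip(*np.nonzero(solid)):
                for name, (dx, dy, dz) in FACES.items():
                    nx, ny, nz = x + dx, y + dy, z + dz
                    outside = not (0 <= nx < sx and 0 <= ny < sy and 0 <= nz < sz)
                    if outside or not solid[nx, ny, nz]:
                        yield int(x), int(y), int(z), name

        # A random 16^3 chunk stands in for the Perlin-noise terrain.
        chunk = np.random.rand(16, 16, 16) > 0.5
        quads = list(visible_faces(chunk))
        print(len(quads), "exposed quads out of", int(chunk.sum()) * 6, "total faces")

    The emitted quads would then be written into one vertex/index buffer per chunk, so the whole chunk becomes a single draw call with no per-instance overhead.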

    Read the article

  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi Friends,

    The flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time, if you don’t want to leave it lingering around.

    First, add this line to the head of your layout to ensure the Prototype and script.aculo.us javascript libraries are loaded:

        <%= javascript_include_tag :defaults %>

    Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts.

        <% flash.each do |flash_type, flash_message| %>
          <%= content_tag :div, flash_message, :class => "flash", :id => flash_type %>
          <% javascript_tag :type => "text/javascript" do %>
            setTimeout("new Effect.Fade('<%= flash_type %>');", 10000);
          <% end %>
        <% end %>

    This will wrap the flash message in a div with class ‘flash’ and id ‘error’, ‘notice’ or ‘warn’, depending on the flash key specified. The value ‘10000’ is the time in milliseconds before the flash will disappear – in this case, 10 seconds.

    This effect looks pretty good, and little javascript stunts like this can help make your site feel more professional. It’s also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application.

    Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you’ll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML code becoming invalid, because the unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that will hide all divs with class ‘flash’ after a time delay, achieving the same effect and also ensuring your code stays valid with more than one flash message!

        <% flash.each do |flash_type, flash_message| %>
          <%= content_tag :div, flash_message, :class => "flash #{flash_type}" %>
        <% end %>
        <% javascript_tag :type => "text/javascript" do %>
          setTimeout("$$('div.flash').each(function(flash){ flash.hide(); })", 10000);
        <% end %>

    In this example, the div id is not set at all. Instead, each flash div will have class “flash” and also a class for the type of flash message (“error”, “warning” etc.).

    Have a Great Day..:)

    Read the article

  • How do you prefer to handle image spriting in your web projects?

    - by Macy Abbey
    It seems like these days it is pretty much mandatory for web applications to sprite images if they want many images on their site AND a fast load time. (Spriting is the process of combining all images referenced from a style sheet into one or a few images, with each reference containing a different background position.) I was wondering what method of implementing sprites you all prefer in your web applications, given that we are referring to non-dynamic images which are included/designed by the programming team, not images which are dynamically uploaded by a third party.

      1. Add new images to an existing sprite by hand, and create each new CSS reference by hand.
      2. Once per build, generate a sprite server-side from your CSS files (which all reference single images, set as background images of an HTML element the same size as the image being sprited) and update all CSS references programmatically.
      3. Use a sprite-generating program to generate a sprite image for you once per release, and hand-insert the new CSS class / image into your project.
      4. Other methods?

    I prefer option 2, as it requires very little hand-coding and image editing.

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario: I'm creating a 2-player fighting game. An average battle will include a map (moving/still) and 2 characters (which are rendered by redrawing a varying number of sprites one after the other). Now, at the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime; //The time when the cycle begun
                long timeDiff;  //The time it took for the cycle to execute
                int sleepTime;  //ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40;

                beginTime = System.nanoTime() / 1000000;
                //execute loop to update, check collisions and draw
                gameLoop();
                //Calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;
                //Calculate sleep time
                sleepTime = fps - (int) (timeDiff);
                if (sleepTime > 0) { //If sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() the characters are drawn to the screen (a character holds an array of images which make up its current sprites); every gameLoop() call will change the character's current sprite to the next one, looping back when the end is reached. But as you can imagine, if a sprite animation is only 3 images in length, then calling gameLoop() 40 times a second will cause the character's movement to be drawn 40/3 = 13 times. This causes a few minor anomalies in the sprites for some characters.

    So my question is: how would I go about delivering a set number of frames per second when I have 2 characters on screen with varying numbers of sprites?
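    One common approach (sketched below in Python for brevity, with made-up names; the same idea ports directly to Java) is to decouple sprite animation from the loop rate: keep ticking at 40 fps, but choose each character's current sprite from elapsed time and a per-animation frame duration instead of advancing it once per tick:

        import time

        class Animation:
            """Pick the sprite from elapsed time, not from the tick count."""
            def __init__(self, frames, frame_ms):
                self.frames = frames          # the sprite images, in order
                self.frame_ms = frame_ms      # how long each sprite stays on screen
                self.start = time.monotonic()

            def current_frame(self):
                elapsed_ms = (time.monotonic() - self.start) * 1000.0
                index = int(elapsed_ms // self.frame_ms) % len(self.frames)
                return self.frames[index]

        # A 3-frame punch shown at 100 ms per frame: the 40 fps game loop just
        # calls current_frame() every tick and draws whatever comes back, so the
        # animation plays at its own speed regardless of the loop rate.
        punch = Animation(frames=["punch_0", "punch_1", "punch_2"], frame_ms=100)
        print(punch.current_frame())

    This way a 3-image animation no longer has to divide evenly into the frame rate, which removes the 40/3 artifacts described above.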

    Read the article

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
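    One standard mitigation, independent of the A* library used: path-find on a grid much coarser than pixels and let per-frame steering move each enemy between waypoints. A small sketch of the coordinate mapping (Python, with an assumed 16-pixel cell size; not code from the game):

        # Path-find on 16x16-pixel cells: an 800x600 field becomes a 50x38 grid,
        # roughly 250x fewer nodes than per-pixel search.
        CELL = 16

        def to_cell(px, py):
            return px // CELL, py // CELL

        def to_pixel(cx, cy):
            # Aim for the centre of a cell; steering smooths the rest.
            return cx * CELL + CELL // 2, cy * CELL + CELL // 2

        start = to_cell(37, 402)    # enemy pixel position  -> cell (2, 25)
        goal = to_cell(712, 85)     # player pixel position -> cell (44, 5)
        # path = astar(coarse_grid, start, goal)   # whatever call your A* library exposes
        # waypoints = [to_pixel(cx, cy) for cx, cy in path]

    Combined with staggering the searches (say, at most one or two enemies re-plan per tick), this usually eliminates the lag spikes, because each search now explores a few thousand nodes instead of nearly half a million.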

    Read the article

  • How to ...set up new Java environment - largely interfaces...

    - by Chris Kimpton
    Hi,

    It looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C. Then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so the publishing on our side will be to the bus, and the interface processes will be taking from the bus and mapping to the destination system. It's a vendor-based system, so most of the core code we can't touch.

    We are currently thinking we will have several processes, one per interface. The question is how to structure things. Several of the APIs we need to work with are Java based. We could go EJB, but we prefer to keep it simple: one process per interface, so that we can restart them individually. Similarly, SOA seems overkill, although I am probably mixing my thoughts about implementations of it with the concepts behind it...

    Currently I'm thinking that something Spring based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB as Ruby snippets that get mixed in... but that's an aside.

    So, any comments/thoughts on the Spring approach – anything more up-to-date/relevant these days?

    EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case, do we need any frameworks at all, perhaps just some gems to make things clearer?

    Thanks in advance, Chris

    Read the article

  • A bounce-rate attack to manipulate SEO?

    - by Denis Volovik
    This is a question for experienced people that might help us shed some light on an issue. We noticed some very strange behavior on our site in Google Analytics. Someone from Finland, namely from the city of Kouvola, is hitting one of our pages – only one page on our site – about a hundred times per day, all with an average bounce rate of 90%+. This is causing our overall bounce rate to go up by 1 to 3% per day, which is very disturbing, since we're trying to do our best to keep it as low as possible, and obviously having it jump from ~24% to 27% just because of one crazy visitor is not making us happy at all.

    We tried implementing a geo-targeted script in order to catch this particular visitor and deliver him a juicy message, and it seemed to help in the beginning – it stopped for a day or two – but now he's back. The geo-targeted script was also logging all IP addresses for page requests originating from Finland in order to find out more details (and in order to block them at the server level later), but they were all mainly cable or DSL connections with various, non-repeating IPs... We are all wondering what he is really up to.

    I think this page should be kept updated with ideas on how to combat this, and perhaps someone could also shed light on what it might be. What is the reason for doing this "bounce-rate attack", as I call it? There was a similar question asked on Stack Overflow earlier, with no meaningful answer, here: How to stop bounce rate manipulation.

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders:

      1. When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one per each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 3.)
      2. When dealing with a logical object (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders per object? Considering an entity class, should I create and define the shader at object initialization?
      3. When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define a uniform offset to pass to the shader to calculate its real position?
      4. Could you make an example of Data Oriented Design applied to a generic zombie class? Is the following good?

    ZombieList class:

        class ZombieList {
            GLuint vbo;                       // generic zombie vertex model
            std::vector<color> colors;        // objects' default colors
            std::vector<texture> textures;    // objects' textures
            std::vector<vector3D> positions;  // objects' positions

        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }

    Read the article

  • How can I make PDFs appear life-size when displayed at 100%?

    - by ændrük
    When I open a letter-size PDF and then zoom to 100%, the page physically displayed on my monitor is smaller than a real letter-size sheet of paper. How can I make "100%" on the computer screen correspond with "100%" in real life?

    Details

    This message suggests that I should be investigating the system-wide DPI settings for my monitor. xdpyinfo reports:

        dimensions: 1024x768 pixels (271x203 millimeters)
        resolution: 96x96 dots per inch

    My monitor has a native display resolution of 1024x768 pixels and a diagonal display size of 12.07 inches. PX CALC returns the following information:

        DPI: 106.05
        Dot Pitch: 0.2395mm
        Size: 9.66" × 7.24" (24.53cm × 18.39cm)

    What I've tried so far

    Running xrandr --dpi 106.05 successfully caused my PDF to appear actual size at 100%, but this effect was lost after rebooting. To make the setting persistent I tried creating the following /etc/X11/xorg.conf:

        Section "Monitor"
            Identifier "ThinkPad X60 LCD"
            DisplaySize 245 183
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Monitor "ThinkPad X60 LCD"
        EndSection

    After re-logging in, /var/log/Xorg.0.log contained

        [ 1167.824] (**) intel(0): Display dimensions: (245, 183) mm
        [ 1167.824] (**) intel(0): DPI set to (106, 106)

    but xdpyinfo still reported

        dimensions: 1024x768 pixels (271x203 millimeters)
        resolution: 96x96 dots per inch

    and "100%" still appeared too small.

    Link to XRANDR wiki
    Link to making XRANDR changes persistent
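    As a cross-check of the 106.05 dpi figure (a few lines of Python using the panel numbers from the question):

        import math

        # 1024x768 pixels on a 12.07-inch diagonal panel.
        w_px, h_px, diag_in = 1024, 768, 12.07

        diag_px = math.hypot(w_px, h_px)    # 1280.0 for this 4:3 resolution
        dpi = diag_px / diag_in
        print(round(dpi, 2))                # 106.05, matching PX CALC

        # The physical size X expects in DisplaySize (millimetres):
        print(round(w_px / dpi * 25.4), round(h_px / dpi * 25.4))   # 245 184

    So the DisplaySize 245 183 in the xorg.conf above matches the monitor's real dimensions to within a millimetre; the problem is that the 96x96 value still wins at the xdpyinfo level.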

    Read the article

  • Encrypted home breaks on login

    - by berkes
    My home is encrypted, which breaks the login. Gnome and other services try to find all sorts of .files, write to them, read from them and so on, e.g. .ICEauthority. They are not found (yet) because at that moment the home is still encrypted. I do not have automatic login set, since that has known issues with encrypted home in Ubuntu.

    When I go through the following steps, there is no problem:

      1. Boot up the system.
      2. [Ctrl][Alt][F1], login.
      3. Run ecryptfs-mount-private.
      4. [Ctrl][Alt][F7], done. Can now login.

    I may have some setting wrong, but have no idea where. I suspect ecryptfs-mount-private should be run earlier in the bootstrap, but do not know how to make it so. Some issues that may cause trouble:

      • I have a fingerprint reader; it works for login and PAM.
      • I have three keyrings in seahorse, containing passwords from old machines (backups). Not just one.

    The suggestion was that the PAM settings are wrong, so here are the relevant parts from /etc/pam.d/common-auth:

        # here are the per-package modules (the "Primary" block)
        auth    [success=3 default=ignore]    pam_fprintd.so
        auth    [success=2 default=ignore]    pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]    pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        # here's the fallback if no module succeeds
        auth    requisite    pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional    pam_ecryptfs.so unwrap
        # end of pam-auth-update config

    I am not sure how this configuration works, but it seems that maybe the optional in "auth optional pam_ecryptfs.so unwrap" is causing ecryptfs to be ignored?

    Read the article

  • How do developers verify that software requirement changes in one system do not violate a requirement of downstream software systems?

    - by Peter Smith
    In my work, I do requirements gathering, analysis and design of business solutions in addition to coding. There are multiple software systems and packages, and developers are expected to work on any of them, instead of being assigned to make changes to only one system or just a few systems. How do developers ensure they have captured all of the necessary requirements and resolved any conflicting requirements?

    An example of this type of scenario: Bob the developer is asked to modify the problem ticket system for a hypothetical utility repair business, which contracts with a local utility company to provide this service. The old system provides a mechanism for an external customer to create a ticket indicating a problem with utility service at a particular address. There is a scheduling system and an invoicing system that are dependent on this data. Bob's new project is to modify the ticket placement system to allow multiple addresses to be entered by a landlord or other end customer with multiple properties. The invoicing system bills per ticket, but should be modified to bill per address. What practices would help Bob discover that the invoicing system needs to be changed as well? How might Bob discover what other systems in his company might need to be changed in order to support the new changes/business model? Let's say there is a documented specification for each system involved, but there are many systems and Bob is not familiar with all of them. End of example.

    We're often in this scenario, and we do have design reviews, but management places ultimate responsibility for any defects (business process or software process) on the developer who is doing the design and the work. Some organizations seem to be better at this than others. How do they manage to detect and solve conflicting or incomplete requirements across software systems? We currently have a lot of tribal knowledge and just a few developers who understand the entire business and software chain. This seems highly ineffective and leads to problems at the requirements level.

    Read the article

  • Email provider - suggestions needed

    - by Christian Fazzini
    We are looking for a good way to have email support. In theory, we need to allow end-users to send emails directly to support and careers. i.e. support@domain_name_here.com and careers@domain_name_here.com. Second, we need to provide emails to our staff. So each staff member has their own email address. i.e. joe@domain_name_here.com, meghan@domain_name_here.com, etc. Google Apps is one that we are considering. However, they are charging $50 per user, per year. Not so bad, considering the quality and the features they offer. However, there are also cheaper alternatives. i.e. my domain registrar offers an email plan for $20 / year / 10 emails. Go Daddy has a number of plans and still a lot more affordable than Google Apps. So far Namecheap and Go Daddy are the only ones I have looked at for email plans. Is it worth signing up with Google Apps? Or are there better alternatives? Your thoughts?

    Read the article

  • Adsense click bot is click bombing my site

    - by Graham
    I have a site that gets roughly 7,000 - 10,000 page views per day right now. Starting around 1 AM on 7/1/12, I noticed the CTR was rising dramatically. These clicks would be credited and then de-credited soon after, so they were obviously fraudulent clicks. The next day I had about 200 clicks in my account, with about 100 of them being fraudulent. It's about 3 - 8 per hour, evenly dispersed across each of the three ads, 24 hours a day. This leads me to believe that it's some sort of AdSense click bot. Also, I removed the ads last evening, then put them back up around 3 AM, and the invalid clicks started within 10 minutes.

    I signed up for statcounter.com to analyze the exit links on the AdSense ads. Then I conditionally blocked ads for the IP address of the person / bot I suspected of doing this. But I think the bot has several proxies to choose from and can refresh IP addresses. I've notified Google through the invalid click form / email 4 times over the past two days in order to let them know I'm aware of the situation and am working on a solution. I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning html/lat/long from my MySQL database. Currently:

      1. A user picks a business category, e.g. "Video Production".
      2. An ajax call is sent to a CodeIgniter controller.
      3. The controller then queries the db and returns the following data via JSON: the lat/long of the marker and the HTML for the popup window. This is approximately 34 rows in the database across two tables per business.
      4. The ajax call receives this data and then plots the marker, along with the HTML, onto the map.

    The data returned from the controller is one big JSON object. This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very taxing on the server.

    My question is: would it be more beneficial to modify the process flow as such?

      1. A user picks a business category, e.g. "Video Production".
      2. An ajax call is sent to a CodeIgniter controller.
      3. The controller then queries the database for the location-based information: lat/long and level (used to change marker icon color). This would be a single row per business with several columns.
      4. The ajax call receives this data and then plots the marker on the map.
      5. When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
      6. The controller queries the database for the HTML and additional data, based on business_id.

    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data for each business up front, only minimal location information is submitted, and that information is re-queried when each business marker is clicked.

    Potential downsides:

      • longer load times when a user clicks a marker icon
      • more code??
      • more queries to the database
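    To make the trade-off concrete, here is a sketch of the two payload shapes (Python standing in for the PHP controller; the field names and the /markers/17/popup endpoint are made up for illustration):

        import json

        # Current flow: everything needed for the popup rides along with the marker.
        full = {
            "business_id": 17,
            "lat": 43.6532, "lng": -79.3832,
            "html": "<div class='popup'><h3>Acme Video</h3>...34 rows of detail...</div>",
        }

        # Proposed flow: just enough to draw the marker.
        slim = {
            "business_id": 17,
            "lat": 43.6532, "lng": -79.3832,
            "level": 2,     # drives the marker icon colour
        }

        print(len(json.dumps(full)), "vs", len(json.dumps(slim)), "bytes per marker")
        # The popup HTML is then fetched on demand, e.g. GET /markers/17/popup,
        # so hundreds of markers only ever pay the slim cost up front.

    Since most users will only ever click a handful of markers, the extra per-click request is usually a good trade against shipping popup HTML for every business in the category.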

    Read the article

  • Static classes and/or singletons -- How many does it take to become a code smell?

    - by Earlz
    In my projects I use quite a lot of static classes. These are usually classes that naturally seem to fit into a single-instance type of thing. Many times I use static classes, and recently I've started using some singletons. How many of these does it take to become a code smell?

    For instance, my recent project, which has a lot of static classes, is an authentication library for ASP.Net. I use a static class for a helper class that fixes ASP.Net error codes, so it can be used like:

        CustomErrorsFixer.Fix(Context);

    And my authentication class itself is a static class:

        //in global.asax's begin_application
        Authentication.SomeState = "blah";
        Authentication.SomeOption = true;
        //etc

        //in global.asax's begin_request
        Authentication.Authenticate();

    When are static or singleton classes bad to use? Am I doing it wrong, or am I just in a project that by definition has very little per-instance state associated with it? The only per-instance state I have is stored in HttpContext.Current.Items, like so:

        /// <summary>
        /// The current user logged in for the HTTP request. If there is not a user logged in, this will be null.
        /// </summary>
        public static UserData CurrentUser {
            get {
                //use HttpContext.Current as a little place to persist static data for this request
                return HttpContext.Current.Items["fscauth_currentuser"] as UserData;
            }
            private set {
                HttpContext.Current.Items["fscauth_currentuser"] = value;
            }
        }

    Read the article

  • Can't customize the application switcher

    - by Byron Hawkins
    I'm trying to change the shortcut for switching applications from Alt+Tab to Ctrl+Tab, but changes in the System Settings > Keyboard > Shortcuts > Navigation are being ignored. Even if I disable the Switch applications shortcut entirely, it still maps to Alt+Tab. At one point I had some keys remapped with xmodmap, but since then I have deleted that configuration and rebooted. My keyboard layout settings are all defaults. The OS is Ubuntu 12.04, with all software updated this afternoon. It would also be nice to have the variation of application switching that includes one icon per window, instead of one icon per application, since I am frequently switching between windows of an application and would rather not wait for the switcher to realize I want a different window of the current application. My other Ubuntu 12.04 install has the application switcher that I like, but I'm not sure how to choose between them on this install (which was set up by my university department). Thanks if anyone can help me with this.

    Read the article

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable. It's an old, organically grown program where the only rationale for creating a new DLL is that one previously didn't exist to fill a given need (and namespaces didn't exist in Delphi, so it never crossed our minds to make dll1.main.pas, dll2.main.pas or something even more unique).

    What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof.

    So, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory. But say I've got a project with about 4,000 files and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about two-thirds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi?

    This tidbit won't help in answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 megs, normal working conditions are usually less than 40 megs, and he thinks it could climb as high as 120 megs.

    Read the article

  • Ubuntu installation on iMac

    - by Shanew
    I have an iMac configured as follows:

      • Screen: 27"
      • CPU: 3.4 GHz i7
      • Graphics: AMD Radeon HD 6970, 1024 MB

    I downloaded the Ubuntu 11.10 64-bit ISO and burnt it to both DVD and USB stick as per the instructions on Ubuntu's download page. Neither will boot. Symptoms are as follows:

    DVD: When the iMac is restarted and booted from the DVD (labelled Windows, which isn't mentioned in Ubuntu's website instructions), one line is displayed against a black screen with a message about the developer and date. After 5 minutes the message hangs and the DVD ceases to spin.

    USB stick: Strangely, I have to select the EFI boot CD icon which appears after holding down the Alt key. A text menu appears, offering to let me try Ubuntu without installing. I select this, and the screen goes blank and stays blank.

    Any ideas? Lastly, after writing Ubuntu to DVD and USB stick, neither could be read by OS X, making the instructions on Ubuntu's website to eject them useless. This might help?

    Thanks, Shane.

    Read the article

  • Sales & Marketing Summit 2012. Did you miss it? Let's fix that right away.

    - by Silvia Valgoi
    On March 28 we held the event dedicated to companies that want to rethink their Sales, Marketing and Customer Support processes by leveraging new paradigms such as social networking, Web 2.0, e-commerce, mobility and multichannel, and cloud computing.

    We have no documentation of Prof. Enrico Finzi's extraordinary talk on the value of innovation and on the 10 leadership factors indispensable for staying competitive in the market, especially in difficult times (it was delivered off the cuff), but we will soon publish an interview with him.

    In the material below you can find the topics covered in the plenary speeches and in the deep-dive sessions:

      • Oracle Fusion CRM, the new-generation solution for improving and increasing the effectiveness of Sales and Marketing processes: Oracle Sales & Marketing Summit - Fusion CRM
      • The most innovative Customer Experience processes: Oracle Sales & Marketing Summit - Customer Experience
      • Ask the Expert: e-commerce
      • Ask the Expert: knowledge management: Oracle Sales & Marketing Summit - Knowledge Management
      • Ask the Expert: marketing & loyalty: Oracle Sales & Marketing Summit - Marketing & Loyalty
      • Ask the Expert: Policy Automation
      • Ask the Expert: Fusion CRM: Oracle Sales & Marketing Summit: Fusion CRM Demo

    Read the article
