Search Results

Search found 1036 results on 42 pages for 'the anti 9'.

Page 37/42

  • Java - Is this a bad design pattern?

    - by Walter White
    Hi all, In our application, I have seen code written like this:

        // User.java (User entity)
        public class User {
            protected String firstName;
            protected String lastName;
            // ... getters/setters (regular POJO)
        }

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;

            // the current user we're editing, if at all
            protected User user;

            public String getFirstName() { return user.getFirstName(); }
            public String getLastName()  { return user.getLastName(); }
        }

    Now, from my experience, this pattern (or anti-pattern) looks bad to me. For one, we're mixing several concerns together. While they're all user-related, it deviates from typical POJO design. If we're going to go this route, shouldn't we do this instead?

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;

            // the current user we're editing, if at all
            protected User user;

            public User getUser() { return user; }
        }

    Simply return the user object, and then we can call whatever methods on it as we wish. Since the first version is quite different from typical bean development, JSR 303 bean validation doesn't work for this model and we have to write validators for every bean. Does anyone else see anything wrong with this design, or am I just being picky as a developer? Walter
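
    A possible refinement of the second version, sketched here (the annotations are standard JSR 303; everything else is illustrative, not from the original post): marking the wrapped entity with @Valid lets bean validation cascade into the User, so the constraints stay on the POJO and no per-command validator is needed.

        import javax.validation.Valid;
        import javax.validation.constraints.NotNull;

        public class User {
            @NotNull
            protected String firstName;
            @NotNull
            protected String lastName;
            // getters/setters as before
        }

        public class UserSearchCommand {
            // Cascade validation into the wrapped entity instead of
            // duplicating getters or writing a per-command validator.
            @Valid
            protected User user;

            public User getUser() { return user; }
        }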

    Read the article

  • How to determine if "html" or "body" scrolls the window.

    - by David Murdoch
    The code below is used to find the element that can be scrolled (body or html) via javascript.

        var scrollElement = (function (tags) {
            var el, $el;
            // iterate through the tags...
            while (el = tags.pop()) {
                $el = $(el);
                // if the scrollTop value is already > 0 then this element will work
                if ($el.scrollTop() > 0) {
                    return $el;
                }
                // if scrollTop is 0, try to scroll
                else if ($el.scrollTop(1).scrollTop() > 0) {
                    // if that worked, reset the scroll top and return the element
                    return $el.scrollTop(0);
                }
            }
            return $();
        }(["html", "body"]));

        // do stuff with scrollElement...like:
        // scrollElement.animate({"scrollTop": target.offset().top}, 1000);

    This code works perfectly when the height of the document is greater than the height of the window. However, when the height of the document is the same or less than the window, the method above will not work because scrollTop() will always be equal to 0. This becomes a problem if the DOM is updated and the height of the document grows beyond the height of the window after the code runs. Also, I generally don't wait until document.ready to set up my javascript handlers (this generally works). I could append a tall div to the body temporarily to force the method above to work, BUT that would require the document to be ready in IE (you can't add a node to the body element before the tag is closed). For more reading on the document.ready "anti-pattern" topic, read this. So, I'd love to find a solution that finds the scrollable element even when the document is short. Any ideas?
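
    One possible fallback, sketched as an assumption (it relies on document.scrollingElement, which only newer browsers expose, and is not from the original post): ask the browser directly which element scrolls the viewport, and only run the probing code above when that property is missing.

        function getScrollElement() {
            // Standards-compliant browsers report the element that scrolls the
            // viewport directly; otherwise fall back to the scrollTop probe above.
            if (document.scrollingElement) {
                return $(document.scrollingElement);
            }
            return scrollElement; // result of the probing IIFE above
        }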

    Read the article

  • IIS, Web services, Time out error

    - by Eduard
    Hello, we have a problem with an ASP.NET web application that uses the web services of another system. Our architecture: a web application and a Windows service both use the same web services.

      - The Windows service runs all the time and sends information to these web services once an hour.
      - The web application lets users send the same information manually.

    The problem: when a user tries to send information manually through the web application, .NET sometimes throws "The operation has timed out" (a WebException?). At the same time, the Windows service successfully sends all necessary information to these web services. The IT staff who support these web services assert that no request from our web application arrived at that time. After we restart IIS (iisreset), everything starts working again. This situation keeps repeating. There is no anti-virus or firewall on the server. My suspicion is that something is wrong with IIS, patches, or configuration, but what? The only unusual thing is that some requests can last 2 minutes (web service response wait time). We tried to reproduce this situation on our local test servers, but everything works fine there. OS: Windows Server 2003 R2, .NET 3.5.

    Read the article

  • DirectShow Filter I wrote dies after 10-24 seconds in Skype video call

    - by Robert Oschler
    I've written a DirectShow push filter for use with Skype using Delphi Pro 6 and the DSPACK DirectShow library. In preview mode, when you test a video input device in the Skype client Video Settings window, my filter works flawlessly. I can leave it up and running for many minutes without an error. However, when I start a video call, the video feed freezes after 10 to 24 seconds, never longer. The call continues fine, with the call duration counter clicking away the seconds, but the video feed is dead, stuck on whatever frame was showing when the freeze happened (although after a long while it turns black, which I believe means Skype has given up on the filter).

    I tried attaching to the process from my debugger with a breakpoint literally set on every method call, and none of them are hit once the freeze takes place. It's as if the thread that makes the DirectShow FillBuffer() call to my filter on behalf of Skype is dead or has been shut down. I can't trace my filter in the debugger because I get weird INT 1 and INT 3 hard debugger interrupts while a Skype video call is in progress. This behavior happens even with my standard web cam input device selected and my DirectShow filter completely unregistered as an ActiveX server, so I suspect it might be some "anti-debugging" code, since it doesn't happen in video input preview mode. Either way, that is why I had to attach to the process after the fact to see whether my FillBuffer() method was still being called, and instead discovered that it appears not to be.

    Note: my plain vanilla USB web cam's DirectShow filter does not exhibit the freezing behavior and works fine for many minutes. There's something about my filter that Skype doesn't like. I've tried Sleep() statements of varying intervals, no Sleep() statements, and doing virtually nothing in the FillBuffer() call. Nothing helps. If anyone has any ideas on what might be the culprit here, I'd like to know. Thanks, Robert

    Read the article

  • I didn't say anything treasonous, quit putting words in my mouth

    - by You guys Lie
    Where at all did I say ANYTHING about organizing some kind of anti-government activity? Nowhere. I do not give a flying fuck about America in case you hadn't realized, I'm talking about what I'm going to do when Jesus Christ returns. Besides, why would I make it easy for them to throw me in a black site prison? Use common sense, sheeple. In the end I guess it doesn't matter anyways, since I do not recognize the government as my ultimate authority. Police/Army/Whatever-- funny outfits and a shiny badge don't make you better than me. Allah will do away with your kind. However, as long as they're with me (as they semi-currently are), we have no problems. I fear the government will have other plans to control the population when things start to further decline, and that is when they will run into problems with me and mine, and probably a large percentage of the public. You'd have to be a fucking fool to think we'd fall into anarchy immediately, given the vast resources and blind loyalty of this country. Saying there are practical limits to free speech for anything I have said in this thread is not only ignorant, it is unpatriotic. I should be being celebrated as the modern day Paul Revere for warning you people about our impending doom. You would do well to study the foundations of this country. Nothing I have said is treasonous at all. Besides, the DoD knows I'm harmless until shit pops off, they have bigger fish to fry right now, go forward. Keep in mind, I don't personally have to do anything to overthrow the government. I just said, I would not advocate for any paramilitary organizations at this time, only if/after we dissolve into (more) tyranny. I am not a terrorist, like some soldiers in Iraq. The machine is doing a great job of ruining itself, while I get to comfortably laugh at it on the nightly news knowing that I'm ready for it to shut down.

    Read the article

  • Open Source: Why not release into Public Domain?

    - by Goosey
    I have recently been wondering why so little code is ever released as 'Public Domain'. MIT and BSD licenses are becoming extremely popular, and practically their only restriction is that the license itself must propagate with the code. The reasons I can think of so far are:

      - Credit - aka prestige, street-cred, 'props', etc. Authors don't want usage of the code restricted, but they do want credit for creating it. Two problems with this reason: I have seen projects copy/paste the MIT or BSD license without adding the 'Copyright InsertNameHere' line, making it a tag-along license that doesn't actually credit them; and I have talked to authors who say they don't care about people giving them credit, they just want people to use their code. Public Domain would make it easier for people to do so.
      - License change - IANAL, but I believe that by licensing their code, even with an extremely non-restrictive license, authors keep the option of changing the license on a later revision? This reason doesn't explain most BSD/MIT-licensed code, which seems to have no intent of ever becoming more restrictive.
      - AS IS - All licenses seem to have the SCREAMING CAPS declaration saying that the software is provided 'as is' and that the author offers no implied or express warranty. IANAL, but isn't this implied in public domain anyway?

    Am I missing some compelling reason? The authors I have talked to about this basically said something along the lines of "BSD/MIT just seems like what you do, no one does public domain". Is this groupthink in action, or is there a compelling anti-public-domain argument? Thanks

    EDIT: I am specifically asking about Public Domain vs BSD/MIT/OtherEquallyUnrestrictiveLicense. Not GPL. Please understand what these licenses allow, and this includes: selling the work, changing the work and not 'giving the changes back', and incorporating the work into a differently licensed (e.g. commercially licensed) work. Thank you to everyone who has replied who understands what BSD/MIT means.

    Read the article

  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand Anemic Domain Models and why they are supposedly an anti-pattern. Here is a real world example. I have an Employee class, which has a ton of properties - name, gender, username, etc.

        public class Employee
        {
            public string Name { get; set; }
            public string Gender { get; set; }
            public string Username { get; set; }
            // Etc.. mostly getters and setters
        }

    Next we have a system that rotates incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex as it involves round-robining enquiries, checking for holidays, employee preferences, etc. So this logic is currently separated out into a service: EmployeeLeadRotationService.

        public class EmployeeLeadRotationService : IEmployeeLeadRotationService
        {
            private IEmployeeRepository _employeeRepository;
            // ...plus lots of other injected repositories and services

            public Employee SelectEmployee(ILead lead)
            {
                // Etc. lots of complex logic
            }
        }

    Then behind our website enquiry form we have code like this:

        public void SubmitForm()
        {
            var lead = CreateLeadFromFormInput();
            var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>()
                                         .SelectEmployee(lead);
            Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks.");
        }

    I don't really encounter many problems with this approach, but supposedly this is something I should run screaming from because it is an Anemic Domain Model. Yet it's not clear to me where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? And what about all the injected repositories and services that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of them?

    Read the article

  • How much of STL is too much?

    - by Darius Kucinskas
    I am using a lot of STL code with std::for_each, bind, and so on, but I have noticed that sometimes STL usage is not a good idea. For example, if you have a std::vector and want to do one action on each item of the vector, your first idea is to use this:

        std::for_each(vec.begin(), vec.end(), Foo());

    and it is elegant and OK, for a while. But then comes the first set of bug reports and you have to modify the code. Now you need to pass a parameter to Foo(), so it becomes:

        std::for_each(vec.begin(), vec.end(), std::bind2nd(Foo(), X));

    but that is only a temporary solution. Now the project is maturing, you understand the business logic much better, and you want to add new modifications to the code. It is at this point that you realize you should have used the good old:

        for (std::vector<T>::iterator it = vec.begin(); it != vec.end(); ++it)

    Is this happening only to me? Do you recognise this kind of pattern in your code? Have you experienced similar anti-patterns using STL?
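
    For comparison, a lambda-based sketch (C++11 or later, not part of the original question): the call site keeps the for_each shape, extra parameters are simply captured, and a range-based for loop is equally open to later changes.

        #include <algorithm>
        #include <vector>

        void process(std::vector<int>& vec, int x)
        {
            // The lambda captures whatever the body needs, so adding a parameter
            // later doesn't force a switch to bind2nd or a hand-written functor.
            std::for_each(vec.begin(), vec.end(), [x](int& item) {
                item += x; // placeholder for Foo()'s real work
            });

            // Or, equivalently, a range-based for loop:
            for (int& item : vec) {
                item += x;
            }
        }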

    Read the article

  • Email not sent until application closes

    - by Tester101
    I have an application that uses SmtpClient to send e-mail, but the e-mails are not sent until the application closes. I have searched and searched but have not been able to find a solution to the problem. The system does have Symantec anti-virus installed, which could possibly be the problem. Does anybody have a solution? Here is the code I am using.

        public class EMail
        {
            private string server;
            public string Server { get { return this.server; } set { this.server = value; } }

            private string to;
            public string To { get { return this.to; } set { this.to = value; } }

            private string from;
            public string From { get { return this.from; } set { this.from = value; } }

            private string subject;
            public string Subject { get { return this.subject; } set { this.subject = value; } }

            private string body;
            public string Body { get { return this.body; } set { this.body = value; } }

            public EMail() {}

            public EMail(string _server, string _to, string _from, string _subject, string _body)
            {
                this.Server = _server;
                this.To = _to;
                this.From = _from;
                this.Subject = _subject;
                this.Body = _body;
            }

            public void Send()
            {
                System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage(this.From, this.To, this.Subject, this.Body);
                message.IsBodyHtml = true;

                System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient(this.Server);
                client.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network;
                // I have tried this, but it still does not work.
                // client.ServicePoint.ConnectionLeaseTimeout = 0;

                try
                {
                    client.Send(message);
                }
                catch (System.Exception ex)
                {
                    System.Windows.Forms.MessageBox.Show(ex.ToString());
                }

                message.Dispose();
            }
        }
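
    Not from the original post, but one thing that may be worth trying if the project can target .NET 4 or later: SmtpClient implements IDisposable there, and disposing it ends the SMTP session (sending QUIT) instead of leaving the connection open until the process exits. A sketch of the Send method under that assumption:

        public void Send()
        {
            using (var message = new System.Net.Mail.MailMessage(this.From, this.To, this.Subject, this.Body))
            using (var client = new System.Net.Mail.SmtpClient(this.Server))
            {
                message.IsBodyHtml = true;
                client.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network;

                // Dispose (end of the using block) closes the SMTP session
                // rather than holding it until the application shuts down.
                client.Send(message);
            }
        }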

    Read the article

  • android & libgdx - disable blurry images rendering

    - by android developer
    I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering: for some reason, all images (textures) on the Android device look a little blurred when using libgdx. This also includes text (fonts). For normal images, even though I show the entire image, I expect it to look as sharp as it does on a computer, especially on a device with such a good screen (it's a Galaxy Nexus).

    I've tried to turn anti-aliasing off with the following code:

        final AndroidApplicationConfiguration androidApplicationConfiguration = new AndroidApplicationConfiguration();
        androidApplicationConfiguration.numSamples = 0; // tried the value of 1 too
        ...

    I've also tried setting the scaling method to various filters, but with no luck. Example:

        texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);

    As a test, I found a sharp image that exactly matches the visible resolution of the device (720x1184 for the Galaxy Nexus, because of the buttons bar) and used it as the background of the libgdx app. Of course, I had to add extra blank space for the texture to load, so the final size of the image (content plus empty space) is still a power of 2 in both width and height (1024x2048 in my case). In the desktop app it looks fine; on the device it looks blurred.

    A weird thing I've noticed: when I change the device's orientation (horizontal <-> vertical), for the very short moment before the rotation animation starts, I see both the image and the text perfectly sharp. Can anyone please help me?
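
    One thing worth ruling out, sketched as an assumption rather than a confirmed diagnosis: if the scene is drawn through a camera or stage whose virtual size differs from the device's real resolution, every texture is rescaled and will blur no matter which filter is set. Matching the camera to the screen keeps a 1:1 pixel mapping (asset name and class layout below are illustrative):

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class SharpScreen extends ApplicationAdapter {
            private OrthographicCamera camera;
            private SpriteBatch batch;
            private Texture background;

            @Override
            public void create() {
                // Use the real screen size as the camera's viewport so a
                // 720x1184 image maps 1:1 onto a 720x1184 screen.
                camera = new OrthographicCamera();
                camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

                batch = new SpriteBatch();
                background = new Texture("background.png"); // hypothetical asset name
                background.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
            }

            @Override
            public void render() {
                camera.update();
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                batch.draw(background, 0, 0);
                batch.end();
            }
        }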

    Read the article

  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello everyone. There are already quite a few posts about the Singleton pattern around, but I would like to start another one on this topic, since I would like to know whether the Factory pattern is the right approach to remove this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. The Eclipse IDE, for example - or rather its workbench model - makes heavy use of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton; the bottom line was that because of these singletons the dependencies in Eclipse 3.x are tightly coupled.

    Let's assume I want to get rid of all singletons completely and instead use factories. My thoughts were as follows:

      - hide complexity
      - less coupling
      - I have control over how many instances are created (just store the reference in a private field of the factory)
      - the factory can be mocked for testing (with Dependency Injection) when it is behind an interface
      - in some cases one factory can make more than one singleton obsolete (depending on business logic/component composition)

    Does this make sense? If not, please give good reasons why you think so. An alternative solution is also appreciated. Thanks, Marc
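
    A minimal sketch of the idea being described (all names are illustrative, not from the post): the factory sits behind an interface, keeps the single instance in a private field, and can be mocked in tests because callers only see the interface.

        class Workbench {
            void activate() { /* stand-in for any ex-singleton's behaviour */ }
        }

        interface ServiceFactory {
            Workbench workbench();
        }

        class DefaultServiceFactory implements ServiceFactory {
            // The "how many instances" decision lives here, in one private field,
            // instead of in a static getInstance() on the class itself.
            private Workbench workbench;

            @Override
            public synchronized Workbench workbench() {
                if (workbench == null) {
                    workbench = new Workbench();
                }
                return workbench;
            }
        }

        // Callers depend only on the interface, so a test can inject
        // a mock ServiceFactory instead of reaching for a global.
        class Editor {
            private final ServiceFactory factory;

            Editor(ServiceFactory factory) {
                this.factory = factory;
            }

            void open() {
                factory.workbench().activate();
            }
        }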

    Read the article

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

      Screen:
        sw - screen width
        sh - screen height
      Image:
        iw - image width
        ih - image height
      Pixel:
        px - x position of pixel in image
        py - y position of pixel in image
      Zoom:
        zf - zoom factor (0.0 to 1.0)
      Background colour:
        bc - background colour to use when screen and image aspect ratios are different

      Outputs:
        The zoomed image (no anti-aliasing)
        The screen position/dimensions of the pixel we are zooming to

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or to use some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simple, you can assume the image and screen have the same aspect ratio - I can live with that. I'll update with more info as required.
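
    One possible way to frame the maths, sketched in Python (the function name and the geometric interpolation between the two scales are my assumptions, not from the post): interpolate the scale from "image fits screen" to "one pixel fits screen", and slide the view centre from the image centre to the chosen pixel.

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            """Return (scale, image_offset, pixel_rect) for a zoom factor zf in [0, 1]."""
            s0 = min(sw / iw, sh / ih)      # zf = 0: whole image fits the screen
            s1 = float(min(sw, sh))         # zf = 1: one image pixel spans the screen's short side

            # Geometric interpolation keeps the zoom speed perceptually even (an assumption).
            scale = s0 * (s1 / s0) ** zf

            # Slide the point kept at the screen centre from the image centre
            # towards the centre of the target pixel as zf goes from 0 to 1.
            cx = (iw / 2.0) * (1 - zf) + (px + 0.5) * zf
            cy = (ih / 2.0) * (1 - zf) + (py + 0.5) * zf

            # Top-left corner at which to draw the scaled image (everything outside is bc).
            ox = sw / 2.0 - cx * scale
            oy = sh / 2.0 - cy * scale

            # Where the chosen pixel ends up on screen, and how big it is.
            pixel_rect = (ox + px * scale, oy + py * scale, scale, scale)
            return scale, (ox, oy), pixel_rect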

    Read the article

  • How to create dynamic panel layout for this logo creation wizard ?

    - by Rebol Tutorial
    I want to create a wizard for the logo badge below with 3 parameters. I can make the title dynamic, but the image and gradient are hardcoded because I can't see how to make them dynamic. Code follows:

        custom-styles: stylize [
            lab: label 60x20 right bold middle font-size 11
            btn: button 64x20 font-size 11 edge [size: 1x1]
            fld: field 200x20 font-size 11 middle edge [size: 1x1]
            inf: info font-size 11 middle edge [size: 1x1]
            ari: field wrap font-size 11 edge [size: 1x1] with [flags: [field tabbed]]
        ]

        panel1: layout/size [
            origin 0 space 2x2 across
            styles custom-styles
            h3 "Parameters" font-size 14 return
            lab "Title" fld_title: fld "EXPERIMENT" return
            lab "Logo" fld_logo: fld "http://www.rebol.com/graphics/reb-logo.gif" return
            lab "Gradient" fld_gradient: fld "5 55 5 10 10 71.0.6 30.10.10 71.0.6"
        ] 278x170

        panel2: layout/size [
            ; layout (window client area) size is 278x170 at the end of the spec block
            at 0x0  ; put the banner on the top left corner
            box 278x170 effect [     ; default box face size is 100x100
                draw [
                    anti-alias on
                    line-width 2.5   ; number of pixels in width of the border
                    pen black        ; color of the edge of the next draw element
                    fill-pen radial 100x50 5 55 5 10 10 71.0.6 30.10.10 71.0.6   ; the draw element
                    box              ; another box drawn as an effect
                    15               ; size of rounding in pixels
                    0x0              ; upper left corner
                    278x170          ; lower right corner
                ]
            ]
            pad 30x-150
            Text fld_title/text font [name: "Impact" size: 24 color: white]
            image http://www.rebol.com/graphics/reb-logo.gif
        ] 278x170

        main: layout [
            vh2 "Logo Badge Wizard"
            guide pad 20
            button "Parameters" [panels/pane: panel1 show panels]
            button "Rendering" [show panel2 panels/pane: panel2 show panels]
            button "Quit" [Unview]
            return box 2x170 maroon
            return panels: box 278x170
        ]

        panel1/offset: 0x0
        panel2/offset: 0x0
        panels/pane: panel1
        view main

    Read the article

  • Is jQuery forcing Adobe ColdFusion to abandon the dead flash product line?

    - by crosenblum
    I have been reading a lot about how Flash development/design has died, and as jQuery (and, in the near future, HTML5) takes over, will this push Adobe/ColdFusion away from Flash and towards less product linking? I love ColdFusion and want it to continue to grow; however, if Adobe only bought ColdFusion from Macromedia so they could bundle Flash and ColdFusion together, does the death of Flash mean the death of ColdFusion?

    http://topnews.us/content/221385-jobs-says-adobes-flash-waning-and-had-its-day
    http://aext.net/2010/03/javascript-jquery-killing-flash-tutorial-jquery-plugin/

    I really don't mind if Flash dies; I do mind greatly if ColdFusion does. Is the success of Flash linked to ColdFusion? If so, why? If not, why not? The purpose of this isn't to start a war about Flash's pros and cons. I am only worried that Adobe would cause problems for ColdFusion if Flash had market/financial problems. That is my main concern. And no, I am not anti-Flash, but my financial sanity depends on ColdFusion being a success, which is why I asked the question - I want everyone else's opinion of this situation. Thank you.

    Read the article

  • Powerpoint displays a "can't start the application" error when an Excel Chart object is embedded in

    - by A9S6
    This is a very common problem when an Excel Worksheet or Chart is embedded into Word or PowerPoint. I am seeing the problem in both Word and PowerPoint, and the cause seems to be a COM add-in attached to Excel. The COM add-in is written in C# (.NET). See the attached images for the error dialogs. I debugged the add-in and found very strange behavior: the OnConnection(...), OnDisconnection(...), etc. methods in the COM add-in work fine until I add an event handler to the code, i.e. handle Worksheet_SheetChange, SelectionChange, or any similar event available in Excel. As soon as I add even a single event handler (though my code has several), Word and PowerPoint start complaining and do not activate the embedded object. In some posts on the internet, people have been asked to remove anti-virus add-ins for Office (none in my case), which makes me believe the problem is somehow related to COM add-ins that are loaded when the host app activates the object. Does anyone have any idea what's happening here?

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 and DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So, firstly, does anyone know what, if anything, would cause OpenGL calls to lock up indefinitely, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madchook.dll (by madshi), and this DLL is also injected into every running process (this seems to be some anti-debugging code). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a "Sync with Outlook" feature that I developed 10 years ago. Now, I'm going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called "Use MAPI Enhancements" where it uses pure MAPI to speed up how it looks for changes, and it allows notes to be synced with RTF instead of just plain text. I'm wondering if supporting two parallel paths of execution is a good idea or not. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus has "script-blocking" features which block my app from connecting to Outlook. But I believe that on the down side, my 32-bit app would not be able to connect with 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?

    Read the article

  • Injecting Dependencies into Domain Model classes with Nhibernate (ASP.NET MVC + IOC)

    - by Sunday Ironfoot
    I'm building an ASP.NET MVC application that uses a DDD (Domain Driven Design) approach, with database access handled by NHibernate. I have a domain model class (Administrator) into which I want to inject a dependency via an IoC container such as Castle Windsor, something like this:

        public class Administrator
        {
            public virtual int Id { get; set; }
            //.. snip ..//
            public virtual string HashedPassword { get; protected set; }

            public void SetPassword(string plainTextPassword)
            {
                IHashingService hasher = IocContainer.Resolve<IHashingService>();
                this.HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

    I basically want to inject IHashingService for the SetPassword method without calling the IoC container directly (because that is supposed to be an IoC anti-pattern). But I'm not sure how to go about it. My Administrator object either gets instantiated via new Administrator(); or it gets loaded via NHibernate, so how would I inject the IHashingService into the Administrator class?

    On second thoughts, am I going about this the right way? I was hoping to avoid having my codebase littered with...

        currentAdmin.Password = HashUtils.Hash(password, Algorithm.Sha512);

    ...and instead get the domain model itself to take care of hashing and neatly encapsulate it away. I can envisage another developer accidentally choosing the wrong algorithm and having some passwords as SHA-512 and some as MD5, some with one salt and some with a different salt, etc. Instead, if developers are writing...

        currentAdmin.SetPassword(password);

    ...then that would hide those details away and take care of the problems listed above, would it not?
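
    One commonly suggested compromise, sketched here with hypothetical names rather than offered as the answer: pass the dependency into the method itself (method injection), so the entity never touches the container and NHibernate can still construct it with a plain constructor.

        public class Administrator
        {
            public virtual int Id { get; set; }
            public virtual string HashedPassword { get; protected set; }

            // The caller (an application service or controller that already has
            // IHashingService injected) hands the dependency in per call.
            public virtual void SetPassword(string plainTextPassword, IHashingService hasher)
            {
                this.HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

        // e.g. in an application service wired up by the container:
        public class AdministratorService
        {
            private readonly IHashingService _hasher;

            public AdministratorService(IHashingService hasher)
            {
                _hasher = hasher;
            }

            public void ChangePassword(Administrator admin, string plainTextPassword)
            {
                admin.SetPassword(plainTextPassword, _hasher);
            }
        }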

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - it has much more to do with colors and their representation. I am working on a very low-level app. I have an array of bytes in memory. Those are characters: they were rendered with anti-aliasing, and they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel:

        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];

        if (intensity == 255)
            continue; // Don't draw it, fully transparent
        else if (intensity == 0)
            setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color
        else {
            // Here's the tricky part
            // Get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);

            // transfer is an int for calculations
            // This is my attempt at averaging
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2);
            newPixel.red = (Byte) transfer;

            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2);
            newPixel.green = (Byte) transfer;

            // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2);
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2);
            newPixel.blue = (Byte) transfer;

            // Set the newpixel in the desired mem. position
            setPixel(x+xi, y+yi, newPixel, base);
        }

    The results, as you can see, are less than desirable. That is a very zoomed-in image; at 1:1 scale it looks like the text has a green "aura". Any idea of how to properly compute this would be greatly appreciated. Thanks for your time!
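
    For reference (not from the original post), standard alpha compositing blends each channel as src*a + dst*(1 - a) rather than averaging. A sketch in C, assuming the convention the code above actually uses (0 = glyph fully opaque, 255 = fully transparent), so the glyph's coverage is (255 - intensity):

        typedef unsigned char Byte;

        /* Blend one channel: 'coverage' is how much of the glyph color shows
           through (0..255). Integer-only, with +127 for rounding. */
        static Byte blend_channel(Byte glyph, Byte background, int coverage)
        {
            return (Byte)((glyph * coverage + background * (255 - coverage) + 127) / 255);
        }

        /* Inside the per-pixel loop, replacing the averaging attempt: */
        int coverage = 255 - intensity;   /* 0 = transparent, 255 = opaque glyph */
        newPixel.red   = blend_channel(color.red,   pixel.red,   coverage);
        newPixel.green = blend_channel(color.green, pixel.green, coverage);
        newPixel.blue  = blend_channel(color.blue,  pixel.blue,  coverage);
        setPixel(x + xi, y + yi, newPixel, base);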

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:

      - develop a project that depends on code from other projects, and
      - deploy the resulting project to the server.

    I am planning to put my code in svn and have the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with PHP projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code.

    Ideally I really want to be able to develop simultaneously on one project and on the projects it depends on. Possible way: check out a project's dependency into a sub-folder as a working copy of the trunk. Problems I foresee: when you want to deploy a project, you might want to freeze its dependencies, right? And the dependency code should not end up duplicated in the project's repository, I think. (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks; see my comment. I am still looking for suggestions that do not require the use of junction points, which are a sort of unsupported hack in WinXP and may break some programs.)

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly tied to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.

    Read the article

  • Indentation control while developing a small python like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexical analysis and parsing) and C++, but I have a few questions regarding scope control. Just like Python it uses white space (or tabs) for indentation; not only that, but I want to implement index breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only from the last loop but from the first one as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
                'hello world'!!       # will never reach this. "!!" outputs with a newline
            end
            'hello world again'!!     # also will never reach this. again "!!" used for cout
        end
        # after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (like C, for example, where I would just use the '}' character), I was wondering if this method would be the best: I would define a global variable, say "int tabIndex", in my yacc file that I would access in my lex file using extern. Then every time I find a tab character in my lex file I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the number typed after it from the tabIndex variable, and if I reach EOF after compiling with tabIndex != 0 I would output a compilation error.

    Now the problem is: what is the best way to see if the indentation got reduced? Should I read \b (backspace) characters from lex and then reduce the tabIndex variable (when the user doesn't use break)? Is there another method to achieve this? Also, just another small question: I want every executable to have its starting point in a function called start(). Should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
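
    Not part of the question, but the approach most indentation-based languages take (including Python's own tokenizer) is an indentation stack in the lexer that emits synthetic INDENT/DEDENT tokens, which then play the role of '{' and '}' in the grammar. A rough C++ sketch of the bookkeeping, with illustrative names:

        #include <stdexcept>
        #include <vector>

        enum Token { INDENT, DEDENT /*, ... other tokens ... */ };

        class IndentTracker {
            std::vector<int> levels{0};   // stack of active indentation widths
        public:
            // Call at the start of each logical line with its leading-whitespace width;
            // emit(T) stands in for however tokens are handed to the parser.
            template <typename Emit>
            void newLine(int width, Emit emit) {
                if (width > levels.back()) {          // deeper: open one scope
                    levels.push_back(width);
                    emit(INDENT);
                } else {
                    while (width < levels.back()) {   // shallower: close scopes until we match
                        levels.pop_back();
                        emit(DEDENT);
                    }
                    if (width != levels.back())
                        throw std::runtime_error("inconsistent indentation");
                }
            }

            // At EOF, close whatever is still open.
            template <typename Emit>
            void finish(Emit emit) {
                while (levels.size() > 1) {
                    levels.pop_back();
                    emit(DEDENT);
                }
            }
        };

    With DEDENT as an explicit token, a construct like "break 2" becomes an ordinary statement whose jump target is resolved in the parser or code generator, and no global tabIndex is needed.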

    Read the article

  • Matlab - Trace contour line between two different points

    - by Graham
    Hi, I have a set of points represented as a 2-row by n-column matrix. These points make up a connected boundary or edge. I require a function that traces this contour from a start point P1 and stops at an end point P2. It also needs to be able to trace the contour in a clockwise or anti-clockwise direction. I was wondering if this can be achieved using some of Matlab's functions. I have tried to write my own function, but it was riddled with bugs. I have also tried using bwtraceboundary and indexing; however, this has problematic results because the points within the matrix are not in the order that creates the contour. Thank you in advance for any help. By the way, I have included a link to a plot of the set of points. It is half the outline of a hand. The function would ideally trace the contour from either the red star to the green triangle, returning the points in order of traversal.

    Read the article

  • how does one _model_ data from relational databases in clojure ?

    - by sandeep
    I have asked this question on Twitter as well as the #clojure IRC channel, yet got no responses. There have been several articles about Clojure for Ruby programmers, Clojure for Lisp programmers, and so on, but the missing part is Clojure for ActiveRecord programmers. There have been articles about interacting with MongoDB, Redis, etc., but these are key-value stores at the end of the day. Coming from a Rails background, we are used to thinking about databases in terms of associations - has_many, polymorphic, belongs_to, etc. The few articles about Clojure/Compojure + MySQL (ffclassic) delve right into SQL. Of course, it might be that an ORM induces an impedance mismatch, but the fact remains that after thinking like ActiveRecord, it is very difficult to think any other way. I believe that relational DBs lend themselves very well to the object-oriented paradigm because they are, essentially, sets, and something like ActiveRecord is very well suited to modelling this data. For example, a blog - simply put:

        class Post < ActiveRecord::Base
          has_many :comments
        end

        class Comment < ActiveRecord::Base
          belongs_to :post
        end

    How does one model this in Clojure, which is so strictly anti-OO? Perhaps the question would have been better if it referred to all functional programming languages, but I am more interested in a Clojure standpoint (and Clojure examples).
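
    For illustration only (this is a sketch, not from the question): in idiomatic Clojure the rows usually stay plain maps, and a has_many association becomes an ordinary function that indexes child rows by their foreign key.

        ;; Rows as they might come back from a SQL query (plain maps).
        (def posts    [{:id 1 :title "Hello"}])
        (def comments [{:id 10 :post-id 1 :body "First!"}
                       {:id 11 :post-id 1 :body "Nice post"}])

        ;; "has_many :comments" as a function: group the children by foreign key
        ;; once, then attach each group to its parent.
        (defn with-comments [posts comments]
          (let [by-post (group-by :post-id comments)]
            (map #(assoc % :comments (get by-post (:id %) [])) posts)))

        ;; (with-comments posts comments)
        ;; => ({:id 1 :title "Hello"
        ;;      :comments [{:id 10 :post-id 1 :body "First!"}
        ;;                 {:id 11 :post-id 1 :body "Nice post"}]})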

    Read the article

  • Where is rebol fill-pen documented (to get glow effect on a round rectangle) ?

    - by Rebol Tutorial
    There is some discussion about fill-pen here: http://www.mail-archive.com/[email protected]/msg02019.html. But I can't see any documentation of the cubic, diamond, etc. effects for fill-pen in Rebol's official docs. I'm trying to draw a rounded rectangle with a glow effect, but I don't really understand the parameters I'm playing with, so I can't get exactly what I'd like (I'd like the glow effect to start from the center, not from the dark top-left corner):

        view layout [
            box 278x185 effect [     ; default box face size is 100x100
                draw [
                    anti-alias on    ; information for the next draw element (not required)
                    line-width 2.5   ; number of pixels in width of the border
                    pen black        ; color of the edge of the next draw element
                    ; fill pen is a little complex:
                    ;fill-pen 10x10 0 90 0 1 1 0.0.0 255.0.0 255.0.255
                    fill-pen radial 20x20 5 55 5 5 10 0.0.0 55.0.5 55.0.5   ; the draw element
                    box              ; another box drawn as an effect
                    15               ; size of rounding in pixels
                    0x0              ; upper left corner
                    278x170          ; lower right corner
                ]
            ]
        ]

    Read the article

  • strcmp() but with 0-9 AFTER A-Z? (C/C++)

    - by Aaron
    For reasons I completely disagree with, but which "The Powers (of Anti-Usability) That Be" continue to decree despite my objections, I have a sorting routine which does basic strcmp() compares to sort by name. It works great; it's hard to get that one wrong. However, at the 11th hour, it's been decided that entries which begin with a number should come AFTER entries which begin with a letter, contrary to the ASCII ordering. They cite the fact that the EBCDIC standard has numbers following letters, so the prior assumption isn't a universal truth, and I have no power to win this argument... but I digress.

    Therein lies my problem. I've replaced all appropriate references to strcmp with a new function called nonstd_strcmp, and now need to implement the modifications to accomplish the sort change. I've used a FreeBSD source as my base: http://freebsd.active-venture.com/FreeBSD-srctree/newsrc/libkern/strncmp.c.html

        if (n == 0)
            return (0);
        do {
            if (*s1 != *s2++)
                return (*(const unsigned char *)s1 -
                        *(const unsigned char *)(s2 - 1));
            if (*s1++ == 0)
                break;
        } while (--n != 0);
        return (0);

    I guess I might need to take some time away to really think about how it should be done, but I'm sure I'm not the only one who's experienced the brain-deadness of just-before-release spec changes.
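
    One common way to do this, sketched as a starting point rather than a drop-in answer (it assumes plain ASCII input; names are illustrative): compare through a per-character rank table that is identical to ASCII order except that the digits are remapped to sort after the letters.

        #include <limits.h>
        #include <stddef.h>

        static unsigned char rank[UCHAR_MAX + 1];

        /* Call once at startup: keep ASCII order, but move '0'-'9' past 'z'. */
        static void init_rank(void)
        {
            int c;
            for (c = 0; c <= UCHAR_MAX; c++)
                rank[c] = (unsigned char)c;
            for (c = '0'; c <= '9'; c++)
                rank[c] = (unsigned char)(0x80 + (c - '0'));
        }

        int nonstd_strncmp(const char *s1, const char *s2, size_t n)
        {
            if (n == 0)
                return 0;
            do {
                unsigned char c1 = rank[(unsigned char)*s1];
                unsigned char c2 = rank[(unsigned char)*s2++];
                if (c1 != c2)
                    return (int)c1 - (int)c2;
                if (*s1++ == 0)
                    break;
            } while (--n != 0);
            return 0;
        }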

    Read the article
