Search Results

Search found 14719 results on 589 pages for 'optimization level'.


  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands - it seems that someone, somewhere forgot to close the connection in code. The result is that the pool of connections is exhausted relatively quickly. As a temporary patch we added Max Pool Size = 500; to the connection string on the web service, and we recycle the pool when all connections are spent, until we figure this out. So far we have done this to get the SPIDs that haven't been used for 15 minutes:

        SELECT SPId
        FROM MASTER..SysProcesses
        WHERE DBId = DB_ID('MyDb')
          AND last_batch < DATEADD(MINUTE, -15, GETDATE())

    We're now trying to get the query that was executed last on such a SPID with:

        DBCC INPUTBUFFER(61)

    but the queries displayed vary, meaning either something at the base level of connection handling is broken, or our deduction is erroneous... Is there an error in our thinking here? Does DBCC / sysprocesses give the results we're expecting, or is there some side-effect catch (for example, do pooled connections influence it)? (Please stick to what we can find out using SQL, since the people who wrote the code are many and not all present right now.)
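
    One hedged alternative for the diagnosis, assuming SQL Server 2005 or later: the dynamic management views can tie each idle session directly to the most recent statement it ran, which avoids calling DBCC INPUTBUFFER one SPID at a time. The 15-minute threshold comes from the query above; everything else is standard DMV usage (sys.dm_exec_sessions, sys.dm_exec_connections, sys.dm_exec_sql_text).

        -- Most recent statement per user session that has been idle for 15+ minutes.
        SELECT s.session_id,
               s.login_name,
               s.last_request_end_time,
               t.text AS last_sql_text
        FROM sys.dm_exec_sessions AS s
        JOIN sys.dm_exec_connections AS c
            ON c.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
        WHERE s.is_user_process = 1
          AND s.last_request_end_time < DATEADD(MINUTE, -15, GETDATE())
        ORDER BY s.last_request_end_time;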

    Read the article

  • .NET Embedded Manifest Crashes XP

    - by Alan Spark
    Hi, I am embedding a manifest in a .NET exe so that it can request elevated permissions in Vista and Windows 7. The manifest that I am using is as follows:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="1.0.0.0" name="ElevationTest" type="win32"/>
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    It works as expected in Vista and Windows 7 but crashes XP with the standard "... has encountered a problem and needs to close..." error. If I don't embed any manifest then it works as expected, but it will obviously not have the required permissions in Vista and Windows 7. What is a standard way of producing an exe that will run with the correct permissions in XP and in Vista / Windows 7? Thanks, Alan
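
    One commonly suggested workaround, sketched here as an assumption rather than a confirmed fix for the XP crash: ship the exe without a requireAdministrator manifest and have it relaunch itself elevated at runtime only on Vista/7, so XP never has to honour the elevation request. ProcessStartInfo, the "runas" verb and WindowsPrincipal are standard .NET/Win32 facilities; the surrounding logic is illustrative only.

        using System;
        using System.Diagnostics;
        using System.Security.Principal;

        static class ElevationHelper
        {
            // Vista and later report OS major version >= 6; XP is 5.x.
            static bool IsVistaOrLater()
            {
                return Environment.OSVersion.Version.Major >= 6;
            }

            static bool IsElevated()
            {
                WindowsIdentity identity = WindowsIdentity.GetCurrent();
                return new WindowsPrincipal(identity).IsInRole(WindowsBuiltInRole.Administrator);
            }

            // Returns true when a new elevated instance was started and the caller should exit.
            public static bool RestartElevatedIfNeeded()
            {
                if (!IsVistaOrLater() || IsElevated())
                    return false;                        // XP, or already running elevated

                ProcessStartInfo psi = new ProcessStartInfo(
                    Process.GetCurrentProcess().MainModule.FileName);
                psi.UseShellExecute = true;
                psi.Verb = "runas";                      // triggers the UAC consent prompt
                try
                {
                    Process.Start(psi);
                    return true;
                }
                catch (System.ComponentModel.Win32Exception)
                {
                    return false;                        // the user cancelled the UAC prompt
                }
            }
        }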

    Read the article

  • How do I fix the alpha value after calling GDI text functions?

    - by Daniel Stutzbach
    I have an application that uses the Aero glass effect, so each pixel has an alpha value in addition to red, green, and blue values. I have one custom-drawn control that has a solid white background (alpha = 255). I would like to draw solid text on the control using the GDI text functions. However, these functions set the alpha value to an arbitrary value, causing the text to translucently show whatever window is beneath my application's.

    After rendering the text, I would like to go through all of the pixels in the control and set their alpha value back to 255. What's the best way to do that? I haven't had any luck with the BitBlt, GetPixel, and SetPixel functions. They appear to be oblivious to the alpha value.

    Here are other solutions that I have considered and rejected:

    Draw to a bitmap, then copy the bitmap to the device: With this approach, the text rendering does not make use of the characteristics of the monitor (e.g., ClearType).

    Use GDI+ for text rendering: This application originally used GDI+ for text rendering (before I started working on Aero support). I switched to GDI because of difficulties I encountered trying to accurately measure strings with GDI+. I'd rather not switch back.

    Set the Aero region to avoid the control in question: My application's window is actually a child window of a different application running in a different process. I don't have direct control over the Aero settings on the top-level window.

    The application is written in C# using Windows Forms, though I'm not above using Interop to call Win32 API functions.
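
    A minimal sketch of the "set the alpha back to 255 afterwards" step, under the assumption that the control's pixels can first be copied into a 32bpp System.Drawing.Bitmap: GDI's GetPixel/SetPixel ignore alpha, but LockBits exposes the raw BGRA bytes, so the alpha channel can be overwritten directly. Only standard System.Drawing and Marshal calls are used; how the bitmap is captured from and written back to the control is left out.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class AlphaFix
        {
            // Force every pixel's alpha byte to 255 in a 32bpp ARGB bitmap.
            public static void ForceOpaque(Bitmap bmp)
            {
                Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
                BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                               PixelFormat.Format32bppArgb);
                try
                {
                    int bytes = Math.Abs(data.Stride) * data.Height;
                    byte[] buffer = new byte[bytes];
                    Marshal.Copy(data.Scan0, buffer, 0, bytes);
                    for (int i = 3; i < bytes; i += 4)   // BGRA layout: bytes 3, 7, 11, ... are alpha
                        buffer[i] = 255;
                    Marshal.Copy(buffer, 0, data.Scan0, bytes);
                }
                finally
                {
                    bmp.UnlockBits(data);
                }
            }
        }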

    Read the article

  • Filtering string in Python

    - by Ecce_Homo
    I am making an algorithm for checking a string (an e-mail address) - it should report things like "E-mail address is valid", but there are rules. The first part of the e-mail has to be a string of 1-8 characters (it can contain letters, numbers, underscore [ _ ]... all the parts an e-mail may contain), and after the @ the second part has to be a string of 1-12 characters (also containing only legal characters), and it has to end with the top-level domain .com

    EDIT

        email = raw_input("Enter the e-mail address:")
        length = len(email)
        if length > 20:
            print "Address is too long"
        elif length < 5:
            print "Address is too short"
        if not email.endswith(".com"):
            print "Address doesn't contain correct domain ending"
        splitting = email.split("@")   # split the address into account and domain
        first_part = len(splitting[0])
        second_part = len(splitting[1])
        account = splitting[0]
        domain = splitting[1]
        for c in account:
            if c not in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.":
                print "Invalid char", "->", c, "<-", "in account name of e-mail"
        for c in domain:
            if c not in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.":
                print "Invalid char", "->", c, "<-", "in domain of e-mail"
        if first_part == 0:
            print "You need at least 1 character before the @"
        elif first_part > 8:
            print "The first part is too long"
        if second_part == 4:
            print "You need at least 1 character after the @"
        elif second_part > 16:
            print "The second part is too long"
        else:
            # if everything is fine return this
            print "E-mail address is valid"

    EDIT: After reporting what is wrong with the input, I now need to make Python recognize a valid address and print "E-mail address is valid". This is the best I can do with my knowledge... and we can't use regular expressions; the teacher said we are going to learn them later.
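
    A compact sketch of the same checks wrapped in a function that returns True or False, so the caller prints "E-mail address is valid" only when every rule passes. The 1-8 and 1-12 length limits and the .com requirement come from the question; whether the 12-character limit includes the ".com" suffix is an assumption here (it is applied to the part before it). No regular expressions are used, and the code is Python 2 to match the raw_input/print syntax above.

        ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789_.")

        def is_valid_email(email):
            # Exactly one "@" separating the account from the domain.
            if email.count("@") != 1:
                return False
            account, domain = email.split("@")
            # The domain must end with the top-level domain .com
            if not domain.endswith(".com"):
                return False
            domain_name = domain[:-len(".com")]
            if not (1 <= len(account) <= 8 and 1 <= len(domain_name) <= 12):
                return False
            # Every character must come from the allowed set.
            return all(c in ALLOWED for c in account + domain_name)

        email = raw_input("Enter the e-mail address:")
        if is_valid_email(email):
            print "E-mail address is valid"
        else:
            print "E-mail address is not valid"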

    Read the article

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend" which performs these operations is written in Perl, while the "frontend" uses PHP to load HTML template files and determines which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins.

    I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit but I am unsure of how to apply it.

    Specifically, what are some good options to structure this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?

    Read the article

  • VS2010 and CSS: What is the best way to position a single form control

    - by George
    OK, I have a ton of controls on my page that I need to place individually. I need to set a margin here, a padding there, etc. None of these particular styles that I want to apply will be applied to more than one control. What is the best practice for determining at which level the style is placed?

    My choices are:

    1) External CSS file.
    1A) Using ClientIDMode = AutoID (the default): I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
    1B) Using ClientIDMode = Predictable: In the external CSS file, I could determine what the ID will be for the controls of interest and create an ID selector (#ControlID { style }). However, I fear maintenance issues due to including/removing parent containers that would cause the ID to change.
    1C) Using ClientIDMode = Static: I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is brought down each time instead of being cached, which is what I'd get with an external CSS file.

    If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "It's nice to have all the CSS in one place"?
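
    A minimal sketch of option 1C, assuming ASP.NET 4.0 (the version that ships with VS2010), where ClientIDMode="Static" makes the rendered client id exactly the server ID; the control, the ID and the style values are invented for illustration. Markup:

        <%-- The rendered client id is exactly "StaticID_SaveHint" --%>
        <asp:Label ID="StaticID_SaveHint" ClientIDMode="Static" runat="server"
                   Text="Your changes have not been saved yet." />

    and in the external stylesheet:

        /* One ID selector per individually placed control */
        #StaticID_SaveHint {
            margin-top: 12px;
            padding-left: 4px;
        }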

    Read the article

  • Handling primary key duplicates in a data warehouse load

    - by Meff
    I'm currently building an ETL system to load a data warehouse from a transactional system. The grain of my fact table is the transaction level. In order to ensure I don't load duplicate rows I've put a primary key on the fact table, which is the transaction ID. I've encountered a problem with transactions being reversed - In the transactional database this is done via a status, which I pick up and I can work out if the transaction is being done, or rolled back so I can load a reversal row in the warehouse. However, the reversal row will have the same transaction ID and so I get a primary key violation. I've solved this for now by negating the primary key, so transaction ID 1 would be a payment, and transaction ID -1 (In the warehouse only) would be the reversal. I have considered an alternative of generating a BIT column, where 0 is normal and 1 is reversal, then making the PK the transaction ID and the BIT column. My question is, is this a good practice, and has anyone else encountered anything like this? For reference, this is a payment processing system, so values will not be modified, so there will only ever be transactions and reversals.
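
    A sketch of the composite-key alternative described above; the table and column names are illustrative rather than taken from the actual warehouse schema. This keeps transaction IDs positive in the warehouse and still blocks an accidental duplicate load of either the payment or the reversal row.

        CREATE TABLE FactPayment
        (
            TransactionID  bigint   NOT NULL,              -- natural ID from the source system
            IsReversal     bit      NOT NULL DEFAULT (0),  -- 0 = original payment, 1 = reversal
            PaymentAmount  money    NOT NULL,
            LoadDate       datetime NOT NULL,
            CONSTRAINT PK_FactPayment PRIMARY KEY (TransactionID, IsReversal)
        );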

    Read the article

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in future lossy and lossless compressions. [source] It is also mentioned that if the image is a multiple of the 1 MCU block size (a Minimum Coded Unit, 'usually 16 pixels in both directions'), then lossless alterations to the JPEG can be performed. [source]

    I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image.

    To try to get closer to a specific answer, where I know the answers must be largely generalities: given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]:

    Can I expect any quality loss from an image that is 484x362?
    What amount of file size addition can I expect (for this example, the additional space would be white pixels)?
    Are there any other disadvantages to growing larger than the 8px grid?

    I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue here is a debate I've had about whether 8-pixel-divisible images are higher quality than images that are not divisible by 8 pixels.

    Read the article

  • log activity. intrusion detection. user event notification (interaction). messaging

    - by Julian Davchev
    I have three questions that I find somehow related, so I'll put them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement these. The system is user-aware, meaning all actions can be tied to a particular logged-in user.

    1. How do I log all actions/activities of users, so that stats/graphics can be extracted later for analysis? Ideally that would include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, with a cron job later dumping them into the DB and another analysing them, might be a good idea here. Since I'm using Zend Framework, I guess I can use a request plugin so I don't have to make the log() call all over the code.

    2. How do I log things so they can be used for intrusion detection? I know most of this can be done at the HTTP level using Apache mods, for example, but there are also specific cases like "5 failed login attempts in a row leads to a captcha", etc. This would also involve tons of inserts. Here I guess direct use of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. I'm not sure whether I could reuse the data from point 1.

    3. The system will notify users of some events - approval needed, something broke, whatever. Some events will need feedback (an action) from the user, others are just informational. I wonder if there are common solutions for needs like this. Example: based on the occurring event(s), the user is notified (in a user inbox, for example) about what happened, with a link or something leading to the details of the thing that happened so they can take action accordingly. These seem trivial at first look, but the problem I see is that coding it directly becomes hard to maintain really fast.

    Read the article

  • How do I refer to a client_deploy.wsdd file that's in WEB-INF?

    - by Paul
    A basic question, but I can't seem to find the answer. I have an Axis-generated web service that also calls another web service (for which the stubs are also generated with Axis). It's deployed in WebLogic 9.2. The called web service requires authentication, and I've googled for the code to set it up. It requires that I set up a client_deploy.wsdd file, which I've done and added to WEB-INF. I need to specify this file to Axis. There seem to be several ways of doing this, including:

        System.setProperty("axis.ClientConfigFile", "client_deploy.wsdd");
        EngineConfiguration config = new FileProvider("client_deploy.wsdd");

    but these aren't working for me. Is the issue the path to the client_deploy.wsdd file? How do I refer to a file that's at the top level of the WEB-INF directory? Googling tells me how to access it as a stream, but I don't want that; I need to pass a file name to these functions... Please point out the obvious thing that I have missed.
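
    A hedged sketch of the ServletContext route, assuming the calling code runs inside the web app so getRealPath() can resolve WEB-INF. EngineConfiguration and FileProvider are the Axis 1.x classes already mentioned in the question; MyServiceLocator stands in for whatever stub locator Axis generated.

        import javax.servlet.ServletContext;
        import org.apache.axis.EngineConfiguration;
        import org.apache.axis.configuration.FileProvider;

        public class WsddClientFactory {

            // Resolve the WSDD at the top level of WEB-INF and hand it to Axis.
            public static EngineConfiguration loadClientConfig(ServletContext context) {
                // getRealPath works when the WAR is exploded on disk (the usual WebLogic case);
                // otherwise fall back to the stream-based FileProvider constructor.
                String path = context.getRealPath("/WEB-INF/client_deploy.wsdd");
                if (path != null) {
                    return new FileProvider(path);
                }
                return new FileProvider(
                        context.getResourceAsStream("/WEB-INF/client_deploy.wsdd"));
            }

            // Usage sketch: Axis-generated locators accept an EngineConfiguration.
            // MyService port = new MyServiceLocator(loadClientConfig(ctx)).getMyServicePort();
        }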

    Read the article

  • Heroku Django app development on Windows

    - by Cliff
    I'm trying to start a Django app on Heroku using Windows, and I'm getting stuck on the following error when I try to pip install psycopg2:

        Downloading/unpacking psycopg2
          Downloading psycopg2-2.4.5.tar.gz (719Kb): 719Kb downloaded
          Running setup.py egg_info for package psycopg2
            Error: pg_config executable not found.
            Please add the directory containing pg_config to the PATH
            or specify the full executable path with the option:
                python setup.py build_ext --pg-config /path/to/pg_config build ...
            or with the pg_config option in 'setup.cfg'.
            Complete output from command python setup.py egg_info:
            running egg_info
            creating pip-egg-info\psycopg2.egg-info
            writing pip-egg-info\psycopg2.egg-info\PKG-INFO
            writing top-level names to pip-egg-info\psycopg2.egg-info\top_level.txt
            writing dependency_links to pip-egg-info\psycopg2.egg-info\dependency_links.txt
            writing manifest file 'pip-egg-info\psycopg2.egg-info\SOURCES.txt'
            warning: manifest_maker: standard file '-c' not found

    I've googled the error and it seems you need libpq-dev and python-dev as dependencies for Postgres under Python. I also turned up a link that says you get into trouble if you don't have the Postgres bin folder in your PATH, so I installed Postgres manually and tried again. This time I get:

        error: Unable to find vcvarsall.bat

    I am still a Python n00b so I am lost. Could someone point me in a general direction?

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy: the 'application' team developed the application-specific logic (often requiring deep insight into specific industry problems), while the 'generic' team developed the parts that were common/generic for all applications (user-interface related stuff, database access, low-level Windows stuff, ...).

    Over the years the boundaries between the teams have become fuzzy. The 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team. The 'generic' team's focus seems to be more 'maintenance oriented': all of the 'very generic' code has already been written, so no new development is needed there, but instead they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...).

    How do you split up the work in different teams if you have different applications? Everybody can write generic code and donates it to a central 'generic' team? Everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)? Generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)? There is no overlap in code between the different applications? Some other way?

    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • Design patterns for Caching Images in a MVC?

    - by Onema
    Hi, I'm designing an image cache system that will be used in an MVC CMS. The main purpose of the image cacher is to modify images - scale, crop, etc. - and cache them on the client site. I have created an image cache Model and Mapper that interact with the database to keep track of the images and know what kind of actions have been applied to them (scale, crop, etc). In addition to the Model and Mapper I have created an ImageCacher class that is used by the API to manage the Model and image creation based on arguments passed by the client site; this class creates the images and generates the links to the images for the View. A coworker argued that I need to include the functionality of this last class inside the Model, as the bulk of the logic should go in the model. I respectfully disagree with him, since I feel the Model's responsibility is to deal with the information about the images cached at the database level, and the responsibility of the ImageCacher class is to create the URL/image that we will be caching (keeping the single responsibility principle). In addition to this I believe that a Model should not have presentation-related features, like creating or showing images. Does anyone have any insight on this? Is there a particular design pattern that would make this division of tasks clear and the image cacher reusable? Should I add all the logic to the Model? Thank you.

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, if we have a customer details table that stores data for customers from different countries, each country has a slightly different address configuration plus 1-2 extra fields; e.g. French customer details may also store an entry code, floor/level, and title fields (madame, etc.), South Africa would have a security number, and so on. Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification, in terms of database design principles, for or against this argument and for a preferred solution. Another option is a separate mini EAV table that stores extra data with parent_id, key, val fields, or to serialise the extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question, specifically: if you have slight variances in data for each record (1-2 different fields, for instance), what is the best way to proceed?
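
    A sketch of the mini-EAV option mentioned at the end, with illustrative table and column names (the real schema isn't given in the question): the columns shared by every country stay in the main table, and the per-country extras (entry code, floor, security number, ...) become key/value rows.

        -- Core columns shared by every country stay in the main table.
        CREATE TABLE customer_details (
            customer_id   INT PRIMARY KEY,
            country_code  CHAR(2)      NOT NULL,
            name          VARCHAR(100) NOT NULL,
            address_line1 VARCHAR(200) NOT NULL
        );

        -- Country-specific extras as key/value rows, one row per attribute.
        CREATE TABLE customer_details_extra (
            customer_id INT          NOT NULL REFERENCES customer_details (customer_id),
            attr_key    VARCHAR(50)  NOT NULL,   -- e.g. 'entry_code', 'security_number'
            attr_value  VARCHAR(200) NULL,
            PRIMARY KEY (customer_id, attr_key)
        );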

    Read the article

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action. It is annotated @ParentPackage('default'), which is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". It also declares @Result so that it responds with the JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configuration needed for the convention plugin:

        <constant name="struts.action.extension" value=","/>

    When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in the Index's @Result section are returned. However, if we request /my_context/, we receive the following error: HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context]. We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows:

        <constant name="struts.convention.package.locators.basePackage" value="com.example"/>
        <constant name="struts.convention.package.locators" value="action"/>

    Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large number of directories in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having that index.jsp redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance. EDIT: JIRA issue registered; problem solved (from Struts 2.3.12 up).

    Read the article

  • C++ - Distributing different headers than development

    - by Ben
    I was curious about doing this in C++. Let's say I have a small library that I distribute to my users. I give my clients both the binary and the associated header files that they need. For example, let's assume the following header is used in development:

        #include <string>

        class ClassA {
        public:
            bool setString(const std::string& str);
        private:
            std::string str;
        };

    Now for my question. For deployment, is there anything fundamentally wrong with giving a 'reduced' header to my clients? For example, could I strip off the private section and simply give them this:

        #include <string>

        class ClassA {
        public:
            bool setString(const std::string& str);
        };

    My gut instinct says "yes, this is possible, but there are gotchas", so that is why I am asking this question here. If this is possible and also safe, it looks like a great way to hide private variables, and thus even avoid forward declarations in some cases. I am aware that the symbols will still be there in the binary itself, and that this is just a visibility thing at the source code level. Thanks!
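
    Shipping a header whose private section differs from the one the library was built with changes the class's size and layout for the client and violates the one-definition rule, so the usual safe way to get the same effect is the pimpl idiom. A minimal sketch follows (std::unique_ptr assumes C++11; a raw pointer deleted in the destructor works the same way on older compilers):

        // ClassA.h -- the only header shipped to clients; no private members visible.
        #include <memory>
        #include <string>

        class ClassA {
        public:
            ClassA();
            ~ClassA();                        // defined in the .cpp, where Impl is complete
            bool setString(const std::string& str);
        private:
            struct Impl;                      // defined only inside the library
            std::unique_ptr<Impl> impl_;
        };

        // ClassA.cpp -- compiled into the distributed binary.
        struct ClassA::Impl {
            std::string str;                  // the real private state lives here
        };

        ClassA::ClassA() : impl_(new Impl) {}
        ClassA::~ClassA() = default;

        bool ClassA::setString(const std::string& str) {
            impl_->str = str;
            return true;
        }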

    Read the article

  • What does an object look like in memory?

    - by NeilMonday
    This is probably a really dumb question, but I will ask anyway. I am curious what an object looks like in memory. Obviously it would have to have all of its member data in it. I assume that functions for an object would not be duplicated in memory (or maybe I am wrong?). It would seem wasteful to have 999 objects in memory, all with the same function defined over and over. If there is only one copy of the function in memory for all 999 objects, then how does each function know whose member data to modify (I specifically want to know at the low level)? Is there an object pointer that gets sent to the function behind the scenes? Perhaps it is different for every compiler?

    Also, how does the static keyword affect this? With static member data, I would think that all 999 objects would use the exact same memory location for their static member data. Where does this get stored? Static functions I guess would also just be one place in memory, and would not have to interact with instantiated objects, which I think I understand.
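
    A small sketch of the typical (compiler-specific, but common) arrangement: each object stores only its non-static data members, one copy of each member function's code exists, and a call passes the object's address as a hidden parameter, conceptually like the free-function form in the comments below.

        #include <iostream>

        class Counter {
            int value;               // per-object data: lives inside each instance
            static int instances;    // static data: one copy shared by the whole class
        public:
            Counter() : value(0) { ++instances; }
            void add(int n) { value += n; }   // one copy of this code, shared by all objects
            int get() const { return value; }
            static int count() { return instances; }  // static: no hidden object pointer
        };
        int Counter::instances = 0;  // the single storage location for the static member

        int main() {
            Counter a, b;
            a.add(5);   // conceptually lowered to something like Counter_add(&a, 5)
            b.add(7);   // ... and Counter_add(&b, 7): the hidden pointer is "this"
            std::cout << a.get() << " " << b.get() << " " << Counter::count() << "\n";  // 5 7 2
        }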

    Read the article

  • IntelliJ inspection -- non-thrown exception

    - by skiaddict1
    This is a follow-up question to 1832203. I'm making it a new question as well, because it seems that posting an answer to a question doesn't change its position on the java page and so I'm worried that it won't get seen. Apologies if I've just stepped on some etiquette toes. I'm an IntelliJ newbie -- started using it two days ago and I'm absolutely head-over-heels in love! One of the things I adore is the code inspections. However... In one of my classes I often create exceptions without throwing them. If I can't turn off (or downgrade) the inspection warning for this then I can see I'm going to end up ignoring inspections on at least that file (if not the entire project), which would be a real pity. I've done a search in the inspection settings for "exception", and found nothing that relates exactly, so I turned them all off just to see, and it's still doing it (even after a rebuild...BTW when are inspections redone? at save? at rebuild? ???), so I would really like some help on how to make this one into an info/typo level -- which I can then ignore. Using the free version, if that makes any difference TIA to all those experienced IntelliJ warriors out there!

    Read the article

  • How to perform a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or TPL, or anything else) in a thread-safe manner?

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var topItem = await GetItemAsync(topItemId);
            Stack<Item> stack = new Stack<Item>();
            stack.Push(topItem);
            while (stack.Count > 0)
            {
                var item = stack.Pop();
                items.Add(item);
                var subItems = await GetSubItemsAsync(item.SubId);
                foreach (var subItem in subItems)
                {
                    stack.Push(subItem);
                }
            }
            return items;
        }

    EDIT: I was thinking of something along these lines, but it's not coming together:

        var tasks = stack.Select(async item =>
        {
            items.Add(item);
            var subItems = await GetSubItemsAsync(item.SubId);
            foreach (var subItem in subItems)
            {
                stack.Push(subItem);
            }
        }).ToList();
        if (tasks.Any())
            await Task.WhenAll(tasks);

    UPDATE: If I wanted to chunk the tasks, would something like this work?

        foreach (var batch in items.BatchesOf(100))
        {
            var tasks = batch.Select(async item =>
            {
                await DoSomething(item);
            }).ToList();
            if (tasks.Any())
            {
                await Task.WhenAll(tasks);
            }
        }

    The language I'm using is C#.
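
    One way to parallelise the fetches without sharing a Stack across tasks is to expand the tree level by level: start GetSubItemsAsync for every item in the current level, await them together with Task.WhenAll, and make the combined results the next level. A sketch under the assumptions that GetSubItemsAsync is safe to call concurrently and that breadth-first order is acceptable for building the flat list (the original is depth-first); GetItemAsync, GetSubItemsAsync, Item and SubId come from the question.

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var currentLevel = new List<Item> { await GetItemAsync(topItemId) };

            while (currentLevel.Count > 0)
            {
                items.AddRange(currentLevel);

                // Start all sub-item fetches for this level at once...
                var fetches = currentLevel
                    .Select(item => GetSubItemsAsync(item.SubId))
                    .ToList();

                // ...await them together, then flatten the results into the next level.
                var results = await Task.WhenAll(fetches);
                currentLevel = results.SelectMany(subItems => subItems).ToList();
            }

            // Only this method's own local lists are mutated, so no locking is needed.
            return items;
        }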

    Read the article

  • Django + GAE (Google App Engine) : most convenient path for a beginner?

    - by mac
    Some background info first: Goal: a medium-level complexity web app that I will need to maintain and possibly extend for a few years. Experience: good knowledge of python, some experience of MVC frameworks (in PHP). Desiderata: using django and google app engine. I read extensively about the compatibility issues between GAE and Django, and I am aware of the GAE patch, the norel project, and other similar pieces of code. I have also understood that the SDK provides some of the features of django "out of the box". Yet, given that I have no previous experience with neither Django nor GAE, I am unable to evaluate to which extent using a patched version of Django will strip away important features, or how far the framework provided in the SDK is compatible with Django. So I am rather confused on what would be the best way to proceed in my situation: Should I simply use a patched version of Django as the differences with the original Django are so minor that I would hardly notice them? Should I write my app completely in "regular django" and try to port it to GAE only afterwards, when I will have got a grasp on Django internals and philosophy? Should I write my app using the framework provided with the SDK and port it to django only afterwards? Should I... ? Thank you in advance for your time and advice.

    Read the article

  • What should a string-accepting interface look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str); // (1)

    The other way would be to use an std::string:

        void f(const string& str); // (2)

    It's also possible to write an overload set and accept both:

        void f(const char* str); // (3)
        void f(const string& str);

    Or even a template, in conjunction with the Boost string algorithms:

        template<class Range> void f(const Range& str); // (4)

    My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string"); invokes a construction of std::string, which involves a heap allocation; if f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on which is the more efficient implementation; however, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
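
    A small sketch of option (3) written so that the two overloads share one implementation: the const char* caller never pays for a std::string construction, the std::string caller gets a convenience overload, and there is no real duplication. The helper name is invented; in later C++ standards std::string_view resolves this tension, but it post-dates this question.

        #include <cstdio>
        #include <string>

        // One shared implementation that only needs a C string.
        static void f_impl(const char* str) {
            std::printf("handling %s\n", str);   // stand-in for the real low-level work
        }

        void f(const char* str) {          // fast path: no std::string is ever built
            f_impl(str);
        }

        void f(const std::string& str) {   // convenience overload: delegates, no duplication
            f_impl(str.c_str());
        }

        int main() {
            f("long very long C string");               // binds to the const char* overload
            f(std::string("built once by the caller")); // binds to the std::string overload
        }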

    Read the article

  • C# struct with an array

    - by Whitey
    I am making a game using C# with the XNA framework. The player is a 2D soldier on screen and the user is able to fire bullets. The bullets are stored in an array. I have looked into using Lists and arrays for this and I came to the conclusion that an array is a lot better for me, as there will be a lot of bullets firing and being destroyed at once, something that I read Lists don't handle so well. After reading through some posts on the XNA forums, this came to my attention: http://forums.xna.com/forums/p/16037/84353.aspx

    I have created a struct like so:

        // Bullets
        struct Bullet
        {
            Vector2 Position;
            Vector2 Velocity;
            float Rotation;
            Rectangle BoundingRect;
            bool Active;
        }

    And I made the array like this:

        Bullet[] bulletCollection = new Bullet[100];

    But when I try to do some code like this:

        // Fire bullet
        if (mouseState.LeftButton == ButtonState.Pressed)
        {
            for (int i = 0; i < bulletCollection.Length; i++)
            {
                if (!bulletCollection[i].Active)
                {
                    // something
                }
            }
        }

    I get the following error:

        'Zombie_Apocalypse.Game1.Bullet.Active' is inaccessible due to its protection level

    Can anyone lend a hand? I have no idea why this error is popping up, or even if I'm declaring the array properly or anything... as the post on the XNA forums doesn't go into detail about that. Thank you for any help you can provide. :)
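
    The error is about member accessibility: in C#, struct and class members default to private, so code in Game1 can't read Active from outside the Bullet struct. A minimal sketch of the same struct with public fields (Vector2 and Rectangle are the XNA types already used in the question); public properties would work just as well.

        struct Bullet
        {
            public Vector2 Position;
            public Vector2 Velocity;
            public float Rotation;
            public Rectangle BoundingRect;
            public bool Active;    // now visible from Game1's update and draw code
        }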

    Read the article

  • Android SDK: Hello World won't run !

    - by RedRoses
    Hello, this is a very, very beginner question regarding Android development. I am trying to create and run the Hello World example from the Android SDK website, but I can't see anything appearing on the screen. It appears to me that Eclipse just hangs at this point:

        [2010-11-05 09:55:47 - HelloAndroid] ------------------------------
        [2010-11-05 09:55:47 - HelloAndroid] Android Launch!
        [2010-11-05 09:55:47 - HelloAndroid] adb is running normally.
        [2010-11-05 09:55:47 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch
        [2010-11-05 09:55:48 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'AVD2'
        [2010-11-05 09:55:48 - HelloAndroid] Launching a new emulator with Virtual Device 'AVD2'
        [2010-11-05 09:55:51 - HelloAndroid] New emulator found: emulator-5554
        [2010-11-05 09:55:51 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-11-05 09:57:10 - HelloAndroid] WARNING: Application does not specify an API level requirement!
        [2010-11-05 09:57:10 - HelloAndroid] Device API version is 8 (Android 2.2)
        [2010-11-05 09:57:10 - HelloAndroid] HOME is up on device 'emulator-5554'
        [2010-11-05 09:57:10 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554'
        [2010-11-05 09:57:12 - HelloAndroid] Installing HelloAndroid.apk...
        [2010-11-05 09:59:33 - HelloAndroid] Success!
        [2010-11-05 09:59:33 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device emulator-5554

    So it says "Starting activity ..." but nothing ever starts, and it's already been more than 30 minutes. What could be wrong? Thanks!

    Read the article

  • Applying powershell outside IT Management.

    - by Tormod
    Hi. We have a flexible process control system in which automation engineers configure large applications comprising thousands of small logical units that are parameterized and integrated into the control flow. There are many tasks that are repetitive at the granular level, and a multitude of proprietary productivity tools have been made to meet this demand. We have different business segments, and the automation engineers vary across the board in skill sets and interests. Fancy GUIs and usability versus flexibility is a common discussion.

    At first glance, PowerShell seems to be a sensible platform on which to implement such tooling, and it would also be an advantageous cross-over skill for managing the IT aspects of system setup and deployment as a whole. This should give the script-savvy their desired flexibility (they are already a scripting crowd), and the GUI-dependent could still get their desired GUI underpinned by PowerShell. But I can't seem to find many people or groups who have tried to use the scriptability and object passing of PowerShell extensively to accommodate a heterogeneous user community outside the realm of IT management. Does anybody have any tips or words of caution? Am I missing something obvious as to why this shouldn't be done? Shouldn't PowerShell be taking over the world? ;-)

    Read the article

  • How can I make "month" columns in Sql?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query to get something like:

        productId  Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
        ---------  ----  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
        1          2008  0    0    0    0    0    0    0    0    0    2    1    0
        1          2009  0    0    0    0    0    0    0    0    0    3    0    0
        2          2009  0    0    0    0    0    0    0    0    0    3    2    0

    The way I'm doing this now, I'm doing 12 selects, one for each month, and putting those in temp tables. I then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the db world, I'm wondering if there is a better high-level approach to this that I might try. (I'm guessing there is.) (I'm using MS SQL Server, so answers that are specific to that DB are fine.) (I'm just starting to look at "PIVOT" as a possible help, but I don't know anything about it yet, so if someone wants to comment about that, that might be helpful as well.)
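
    A hedged sketch of the PIVOT approach mentioned at the end, assuming SQL Server 2005 or later and a source table named Orders with the three columns shown above (swap in the real table and column names). The inner query reduces each row to product, year, month and quantity; PIVOT then sums the quantities into one column per month, and ISNULL turns the missing months into zeros.

        SELECT productId,
               [Year],
               ISNULL([1], 0)  AS [Jan], ISNULL([2], 0)  AS [Feb], ISNULL([3], 0)  AS [Mar],
               ISNULL([4], 0)  AS [Apr], ISNULL([5], 0)  AS [May], ISNULL([6], 0)  AS [Jun],
               ISNULL([7], 0)  AS [Jul], ISNULL([8], 0)  AS [Aug], ISNULL([9], 0)  AS [Sep],
               ISNULL([10], 0) AS [Oct], ISNULL([11], 0) AS [Nov], ISNULL([12], 0) AS [Dec]
        FROM (
            SELECT productId,
                   YEAR(dateOrdered)  AS [Year],
                   MONTH(dateOrdered) AS [Month],
                   Qty
            FROM Orders
        ) AS src
        PIVOT (
            SUM(Qty) FOR [Month] IN ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12])
        ) AS p
        ORDER BY productId, [Year];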

    Read the article
