Search Results

Search found 3489 results on 140 pages for 'summary'.


  • auto-document exceptions on methods in C#/.NET

    - by Sarah Vessels
    I would like some tool, preferably one that plugs into VS 2008/2010, that will go through my methods and add XML comments about the possible exceptions they can throw. I don't want the <summary> or other XML tags to be generated for me, because I'll fill those out myself, but it would be nice if I could see which exceptions could be thrown even on private/protected methods. Otherwise I find myself going through the methods, hovering over all the method calls within them to see the list of exceptions, and then updating that method's <exception> list to include them. Maybe a VS macro could do this? From this:

        private static string getConfigFilePath()
        {
            return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE);
        }

    To this:

        /// <exception cref="System.ArgumentException"/>
        /// <exception cref="System.ArgumentNullException"/>
        /// <exception cref="System.IO.IOException"/>
        /// <exception cref="System.IO.DirectoryNotFoundException"/>
        /// <exception cref="System.Security.SecurityException"/>
        private static string getConfigFilePath()
        {
            return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE);
        }

    Update: it seems the tool would have to go through the methods recursively. E.g., method1 calls method2, which calls method3, which is documented as throwing NullReferenceException, so both method2 and method1 are documented by the tool as also throwing NullReferenceException. The tool would also need to eliminate duplicates: if two calls within a method are documented as throwing DirectoryNotFoundException, the method would list <exception cref="System.IO.DirectoryNotFoundException"/> only once.
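
    No stock VS 2008/2010 feature does this, but the aggregation the update describes is easy to pin down. Below is a minimal, hypothetical C# sketch of just that recursive walk with de-duplication; the per-method "directly thrown" sets and the call graph are assumed inputs that a real tool would have to extract from code analysis or existing XML docs:

        // Hedged sketch: transitive, de-duplicated exception sets per method.
        // The inputs (directThrows, callGraph) are hypothetical.
        using System;
        using System.Collections.Generic;

        static class ExceptionAggregator
        {
            public static HashSet<string> Collect(
                string method,
                Dictionary<string, HashSet<string>> directThrows,
                Dictionary<string, HashSet<string>> callGraph,
                HashSet<string> visited)
            {
                var result = new HashSet<string>();      // a set eliminates duplicates
                if (!visited.Add(method)) return result; // guard against call cycles

                HashSet<string> direct;
                if (directThrows.TryGetValue(method, out direct))
                    result.UnionWith(direct);

                HashSet<string> callees;
                if (callGraph.TryGetValue(method, out callees))
                    foreach (var callee in callees)
                        result.UnionWith(Collect(callee, directThrows, callGraph, visited));

                return result;
            }
        }

    With method1 -> method2 -> method3 and method3 marked as throwing NullReferenceException, Collect("method1", ...) yields a set containing that exception exactly once, matching the behaviour the update asks for.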


  • own drawImage / drawLine in OpenGL

    - by Chrise
    I'm implementing some native 2D draw functions in my graphics engine for Android, but now another question comes up as I observe the performance of my program. At the moment I'm implementing drawLine/drawImage functions. In summary, each line / image is drawn with the following per-call values:

    - the color
    - the alpha value
    - the width of the line
    - rotation (only for images)
    - size/scale (also for images)
    - blending method (subtract, add, normal alpha)

    Now, when an imageLine is drawn, I put the CPU-calculated vertex positions and UV values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing the drawing information (color, alpha, etc.) to the shader via uniforms. When I draw an image, the pre-set VBO is drawn directly after passing that information. The first fact I recognized is, of course, that drawing images is much faster than imageLines (because of the VBOs), but also that I cannot pre-put vertex data into a VBO for imageLines, because imageLines have no static shape like normal images (varying line length, varying line width, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a normal FloatBuffer instead of a VBO. So my question is: what's the best way to manage images and other 2D graphics functions? It is quite important to me that a user of the engine can draw as many images/2D graphics as possible without losing too much performance. You can find the functions for drawing images, imagelines, rects, quads, etc. here: https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java Here is an example of how it looks with many images (testing artificial neural networks); it works fine, but is already a little slow with that many images... :( (screenshot not included in this summary)
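
    One common pattern (a hedged sketch, not from the original thread) is to batch all dynamic geometry for a frame into one client-side buffer and upload it with a single glBufferSubData into a GL_DYNAMIC_DRAW VBO, so many lines cost one upload and one draw call instead of one drawArrays each. The buffer size and the 4-floats-per-vertex (xy + uv) layout below are assumptions:

        // Hedged sketch (GLES20): batch per-frame line quads into one dynamic VBO.
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import android.opengl.GLES20;

        class LineBatch {
            private static final int MAX_FLOATS = 6 * 4 * 1024; // 1024 quads, xy+uv per vertex
            private final FloatBuffer cpu = ByteBuffer
                    .allocateDirect(MAX_FLOATS * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
            private final int[] vbo = new int[1];

            LineBatch() {
                GLES20.glGenBuffers(1, vbo, 0);
                GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
                GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, MAX_FLOATS * 4,
                        null, GLES20.GL_DYNAMIC_DRAW); // allocate once, refill each frame
            }

            void addLineQuad(float[] sixVerticesXyUv) { cpu.put(sixVerticesXyUv); }

            void flush() {
                int vertexCount = cpu.position() / 4;  // 4 floats per vertex
                cpu.flip();
                GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
                GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, vertexCount * 4 * 4, cpu);
                // ... glVertexAttribPointer setup for position/uv goes here ...
                GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
                cpu.clear();
            }
        }

    Moving per-line color/alpha into vertex attributes instead of uniforms would let whole batches share one draw call; that part is left out here for brevity.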


  • In Excel 2010, how can I show a count of occurrences on a specific date within multiple time ranges?

    - by Justin
    Here's what I'm trying to do. I have three columns of data: ID, Date (MM/DD/YY), Time (00:00). I need to create a chart or table that shows the number of occurrences on, say, 12/10/2010 between 00:00 and 00:59, 1:00 and 1:59, etc., for each hour of the day. I can do COUNTIF and get results for the date, but I cannot figure out how to show a summary of the count of occurrences per hour for the 24-hour period. I have months of data and many times each day. An example of the data set is below. Any help is greatly appreciated.

        ID   Date        Time
        221  12/10/2010  00:01
        223  12/10/2010  00:45
        227  12/10/2010  01:13
        334  12/11/2010  14:45

    I would like the results to read:

        Date        Time               Count
        12/10/2010  00:00AM - 00:59AM  2
        12/10/2010  01:00AM - 01:59AM  1
        12/10/2010  02:00AM - 02:59AM  0
        ......(continues for every hour of the day)
        12/11/2010  00:00AM - 00:59AM  0
        .........
        12/11/2010  14:00PM - 14:59PM  1

    And so on. Sorry for the length but I wanted to be clear. EDIT: Here is a sample spreadsheet. Very little data, but I couldn't figure out a better way without having a huge file. Tested in Notepad for formatting and it imported OK as CSV.

        PID,Date,Time
        2888759,12/10/2010,0:10
        2888760,12/10/2010,0:10
        2888761,12/10/2010,0:10
        2888762,12/10/2010,0:11
        2889078,12/10/2010,15:45
        2889079,12/10/2010,15:57
        2889080,12/10/2010,15:57
        2889081,12/10/2010,15:58
        2889082,12/10/2010,16:10
        2889083,12/10/2010,16:11
        2889084,12/10/2010,16:11
        2889085,12/10/2010,16:12
        2889086,12/10/2010,16:12
        2889087,12/10/2010,16:12
        2889088,12/10/2010,16:13
        2891529,12/14/2010,16:21
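
    A hedged sketch of one way to do this with COUNTIFS (available since Excel 2007). It assumes the raw data sits in columns B (Date) and C (Time) as real date/time values, the summary block holds the date in E2, and F2 holds the starting time of the hour (0:00, 1:00, ...):

        =COUNTIFS($B:$B, $E2, $C:$C, ">="&$F2, $C:$C, "<"&$F2+TIME(1,0,0))

    Fill the formula down for every date/hour row. A PivotTable with the Time field grouped by hour is an alternative that avoids helper columns.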


  • CodeIgniter -- unable to use an object

    - by Smandoli
    THE SUMMARY: When I call .../index.php/product, I receive:

        Fatal error: Call to a member function get_prod_single() on a non-object
        in /var/www/sparts/main/controllers/product.php on line 16

    The offending line 16 is:

        $data['pn_oem'] = $this->product_model->get_prod_single($product_id);

    Looks like I don't know how to make this a working object. Can you help me? THE CODE: In my /Models folder I have product_model.php:

        <?php
        class Product_model extends Model {

            function Product_model() {
                parent::Model();
            }

            function get_prod_single($product_id) {
                // This will be a DB lookup ...
                return 'foo'; // stub to get going
            }
        }
        ?>

    In my /controllers folder I have product.php:

        <?php
        class Product extends Controller {

            function Product() {
                parent::Controller();
            }

            function index() {
                $this->load->model('Product_model');
                $product_id = 113; // will get this dynamically
                $data['product_id'] = $product_id;
                $data['pn_oem'] = $this->product_model->get_prod_single($product_id);
                $this->load->view('prod_single', $data);
            }
        }
        ?>
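
    For what it's worth, CodeIgniter's loader makes a model available as a controller property under exactly the name it was loaded with, and that property lookup is case-sensitive, so loading 'Product_model' gives you $this->Product_model, not $this->product_model. A hedged sketch of two possible fixes:

        // Option 1: match the case used when loading the model.
        $this->load->model('Product_model');
        $data['pn_oem'] = $this->Product_model->get_prod_single($product_id);

        // Option 2: load the model under an explicit lowercase alias.
        $this->load->model('Product_model', 'product_model');
        $data['pn_oem'] = $this->product_model->get_prod_single($product_id);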


  • Whether to put method code in a VB.Net data storage class, or put it in a separate class?

    - by Alan K
    TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so, should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing?

    I don't know whether there's a good answer to this, but if there is I haven't found it yet by searching in the usual places. In my VB.Net (2010, if it matters) WinForms project I have about a dozen or so class objects in an object model. Some of these are pretty simple and do little more than act as data storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher-level objects in use, though the exact number will be runtime dependent, so I can't be more precise than that. As I was writing the method code for one of the top-level ones I noticed that it was starting to get quite lengthy. Memory optimisation is something of a lost art given how much memory the average PC has these days, but I don't want to make my application a resource hog. So my questions, for anyone who knows .Net way better than I do (of which there will be many), are:

    - Is the code loaded into memory with each instance of the class that's created?
    - Alternatively, is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.)
    - If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods?

    Any thoughts appreciated.


  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text. According to the standard, objects of different type may not share the same memory location. So this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me whether this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers: the answer is NO, for two reasons:

    1. You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    2. The pointer address may not have the right alignment for the target object. In that case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it results in performance overhead; on ARM it will trigger a CPU trap and your program will be terminated!

    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
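
    The usual standards-conforming workaround (a hedged sketch, not quoted from the original answers) is to copy the bytes into a real object instead of reinterpreting the pointer; std::memcpy both performs a legal object access and sidesteps the alignment problem:

        // Hedged sketch: turn raw bytes into an unsigned without aliasing UB.
        #include <cstring>

        unsigned read_unsigned(const char* bytes)
        {
            unsigned u;
            std::memcpy(&u, bytes, sizeof u); // compilers optimize this to a plain load
            return u;
        }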


  • Can I create a Google calendar for a user in a hosted domain using the admin credentials

    - by user351013
    I use the admin credentials for all of my interactions with the Google API, and I can retrieve/create/update/delete events from and for all of my hosted-domain users. However, when I go to create a calendar for a hosted-domain user, the calendar is created in the admin's space. In the example below, GoogleUserName does NOT match GoogleAccount. The postUri would look similar to http://www.google.com/calendar/feeds/[email protected]/owncalendars/full and the GoogleUserName is [email protected]. The API creates a calendar, but it is in the admin's space.

        CalendarService service = new CalendarService("Test");
        service.setUserCredentials(GoogleUserName, GooglePassword);

        CalendarEntry calendar = new CalendarEntry();
        calendar.TimeZone = "America/Chicago";
        calendar.Title.Text = Title;
        calendar.Summary.Text = Description;
        calendar.Color = Color;
        calendar.Selected = true;
        calendar.Hidden = false;

        Uri postUri = new Uri(String.Format(
            "http://www.google.com/calendar/feeds/{0}/owncalendars/full", GoogleAccount));
        CalendarEntry createdCalendar = (CalendarEntry)service.Insert(postUri, calendar);

    The documentation does specify to use the user's credentials; however, the documentation is not specific to hosted domains a great deal of the time, and as such I am always resorting to trial and error when trying interactions. That I can use all of the CRUD operations on the user's events themselves using the admin credentials leads me to believe that it might be possible.


  • Prevent coercion to a single type in unlist() or c(); passing arguments to wrapper functions

    - by Leo Alekseyev
    Is there a simple way to flatten a list while retaining the original types of list constituents?.. Is there a way to programmatically construct a heterogeneous list?.. For instance, I want to create a simple wrapper for functions like png(filename,width,height) that would take device name, file name, and a list of options. The naive approach would be something like

        my.wrapper <- function(dev, name, opts) {
            do.call(dev, c(filename = name, opts))
        }

    or similar code with unlist(list(...)). This doesn't work because opts gets coerced to character, and the resulting call is e.g. png(filename, width="500", height="500"). If there's no straightforward way to create heterogeneous lists like that, is there a standard idiomatic way to splice arguments into functions without naming them explicitly (e.g. do.call(dev, list(filename=name, width=opts["width"]))?

    -- Edit --

    Gavin Simpson answered both questions below in his discussion about constructing wrapper functions. Let me give a summary of the answer to the title question: it is possible to construct a heterogeneous list with c() provided the arguments to c() are lists. To wit:

        > foo <- c("a","b"); bar <- 1:3
        > c(foo,bar)
        [1] "a" "b" "1" "2" "3"
        > c(list(foo),list(bar))
        [[1]]
        [1] "a" "b"

        [[2]]
        [1] 1 2 3

        > c(as.list(foo),as.list(bar))  ## this creates a flattened heterogeneous list
        [[1]]
        [1] "a"

        [[2]]
        [1] "b"

        [[3]]
        [1] 1

        [[4]]
        [1] 2

        [[5]]
        [1] 3
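
    Applied to the wrapper itself (a hedged sketch): wrapping the scalar in list() means c() combines two lists, so nothing is coerced:

        # Hedged sketch: both arguments to c() are lists, so types survive.
        my.wrapper <- function(dev, name, opts) {
            do.call(dev, c(list(filename = name), opts))
        }
        my.wrapper(png, "out.png", list(width = 500, height = 500))  # then dev.off()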


  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture.

    I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (uses NH). There is a WPF UI with a read-only connection to the DB; on refresh, the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen. For example, assume an XML snippet XXX is received by the service: it deserializes the XML, creates the book object, and saves it to the DB, where one property/column, SCHEMA, contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the XML that was saved (yes, the XML is the constructor param).

    Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
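
    Since the profiler shows the cost is object construction rather than the DB, one hedged option (an assumption, not from the thread) is to memoize the deserialized entities keyed by ID plus a version or timestamp column (NHibernate's <version> mapping provides one), so a refresh re-deserializes only rows that actually changed:

        // Hedged sketch: skip re-deserializing rows whose version hasn't changed.
        // "Book", "Id", "Version" are hypothetical stand-ins for the real entity.
        using System.Collections.Generic;

        class Book
        {
            public Book(string xml) { /* expensive XML deserialization */ }
        }

        class BookCache
        {
            private readonly Dictionary<int, KeyValuePair<int, Book>> cache =
                new Dictionary<int, KeyValuePair<int, Book>>();

            public Book Get(int id, int version, string xml)
            {
                KeyValuePair<int, Book> hit;
                if (cache.TryGetValue(id, out hit) && hit.Key == version)
                    return hit.Value;              // unchanged row: reuse the object
                var book = new Book(xml);          // only changed rows pay this cost
                cache[id] = new KeyValuePair<int, Book>(version, book);
                return book;
            }
        }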


  • How to get the parameter names of an object's constructors (reflection)?

    - by Tom
    Say I somehow got an object reference from another class:

        Object myObj = anObject;

    Now I can get the class of this object:

        Class objClass = myObj.getClass();

    Now I can get all constructors of this class:

        Constructor[] constructors = objClass.getConstructors();

    Now I can loop over every constructor:

        if (constructors.length > 0) {
            for (int i = 0; i < constructors.length; i++) {
                System.out.println(constructors[i]);
            }
        }

    This already gives me a good summary of the constructor; for example, a constructor public Test(String paramName) is shown as public Test(java.lang.String). Instead of giving me the class type, however, I want to get the name of the parameter, in this case "paramName". How would I do that? I tried the following without success:

        if (constructors.length > 0) {
            for (int iCon = 0; iCon < constructors.length; iCon++) {
                Class[] params = constructors[iCon].getParameterTypes();
                if (params.length > 0) {
                    for (int iPar = 0; iPar < params.length; iPar++) {
                        Field fields[] = params[iPar].getDeclaredFields();
                        for (int iFields = 0; iFields < fields.length; iFields++) {
                            String fieldName = fields[iFields].getName(); // was fields[i], an indexing bug
                            System.out.println(fieldName);
                        }
                    }
                }
            }
        }

    Unfortunately, this is not giving me the expected result (it lists the fields of each parameter's type, not the parameter names). Could anyone tell me how I should do this or what I am doing wrong? Thanks!
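
    For what it's worth, parameter names are not stored in class files by default, which is why reflecting over the parameter type's fields cannot recover them. Since Java 8 you can compile with javac -parameters and read them directly (a hedged sketch; on older JVMs, libraries such as Paranamer recover names from debug info instead):

        // Hedged sketch (Java 8+, code compiled with "javac -parameters"):
        import java.lang.reflect.Constructor;
        import java.lang.reflect.Parameter;

        class ParamNames {
            static void print(Class<?> objClass) {
                for (Constructor<?> ctor : objClass.getConstructors()) {
                    for (Parameter p : ctor.getParameters()) {
                        // p.getName() is "paramName" with -parameters, else "arg0", "arg1", ...
                        System.out.println(ctor.getName() + ": " + p.getName());
                    }
                }
            }
        }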


  • Web UI element to represent two different micro-views of data in the same spot?

    - by Chris McCall
    I've been tasked with laying out a portion of a screen for a customer care (call center) app that serves as sort of a header/summary block at the top of the screen. Here's what it looks like: (screenshot not included in this summary) The important part is in the red box. That little tooltip is the biz's vision for how to represent both the numeric SiteId and the textual Site Name in the same piece of screen real estate. I asked, and the business thinks the Name is more important than the ID, but lists the Id by default, because the Name can't be truncated in the display and there's only so much horizontal room to put the data. So they go with the Id, because it's fewer characters, and then they have the user mouse over the Id to display the name (presumably because the tooltip can be of unlimited width, and since it floats over the rest of the screen, the full name will always be displayed). So, here's my question: is there some better UI metaphor that I don't know about that could get this job done while meeting the following constraints?

    - Does not require the mouse (uses a keyboard shortcut to do the "reveal")
    - Allows the user to copy and paste the name
    - Will not truncate the name
    - Provides for the display of both the ID and name in the same spot
    - Works with IE7


  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following:

        insert into the_table (...)
        select ... from some_other_table

    Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why. The details: I think this is best explained through code examples. I have a table:

        create table the_table
        (
            a int not null,
            b smallint not null,
            c tinyint not null
        );

    If I add a primary key, this insert query is terribly slow:

        alter table the_table
        add constraint PK_the_table primary key(a, b);

        -- Inserting ~880,000 rows
        insert into the_table (a,b,c)
        select a,b,c from some_view;

    Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me. More info:

    - The estimated execution plan shows 0% total query time spent on the clustered index insert
    - SQL Server 2008 R2 Developer edition, 10.50.1600

    Any ideas?
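
    A hedged guess at a mitigation rather than a confirmed diagnosis: with a clustered PK on (a, b), rows arriving in random key order force page splits during the insert, whereas building the index afterwards is a single bulk sort. Presenting the rows in clustered-key order often narrows the gap:

        -- Hedged sketch: feed rows in clustered-key order so pages fill
        -- sequentially instead of splitting mid-insert.
        insert into the_table (a, b, c)
        select a, b, c
        from some_view
        order by a, b;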


  • Google Sites API - File Cabinets: Spaces and extension separator (.) are removed from file names

    - by user1299447
    We have a series of internal reports that we update regularly from our internal databases. We built an application in C# that uploads these reports to a Google Site. Everything works fine, except that the name of the file shown to the final user in the File Cabinet does not include the original spaces nor the extension separator (.). For example, Stock per warehouse.pdf is shown as: Stockperwarehousepdf. Below is a simplified version of the code.

        private AtomEntry UploadAttachment(string filename, AtomEntry parent,
                                           string title, string description)
        {
            SiteEntry entry = new SiteEntry();
            AtomCategory category = new AtomCategory(SitesService.ATTACHMENT_TERM,
                                                     SitesService.KIND_SCHEME);
            category.Label = "attachment";
            entry.Categories.Add(category);

            AtomLink parentLink = new AtomLink(AtomLink.ATOM_TYPE, SitesService.PARENT_REL);
            parentLink.HRef = parent.SelfUri;
            entry.Links.Add(parentLink);

            entry.MediaSource = new MediaFileSource(filename,
                MediaFileSource.GetContentTypeForFileName(filename));
            entry.Content.Type = MediaFileSource.GetContentTypeForFileName(filename);
            entry.Title.Text = title;
            entry.Summary.Text = description;

            AtomEntry newEntry = null;
            newEntry = service.Insert(new Uri(makeFeedUri("content")), entry);
            return newEntry; // assumed; the return was omitted in the original snippet
        }

    The key line is where the MediaFileSource object is created. Any idea of what we are missing? I've tried all sorts of changes :(


  • subset complete or balanced dataset in R

    - by SHRram
    I have a dataset with an unequal number of repetitions. I want to subset the data by removing those entries that are incomplete (i.e., replicated fewer times than the maximum). Just a small example:

        set.seed(123)
        mydt <- data.frame(name = rep(c("A", "B", "C", "D", "E"), c(1, 2, 4, 4, 3)),
                           var1 = rnorm(14, 3, 1),
                           var2 = rnorm(14, 4, 1))
        mydt
           name     var1     var2
        1     A 2.439524 3.444159
        2     B 2.769823 5.786913
        3     B 4.558708 4.497850
        4     C 3.070508 2.033383
        5     C 3.129288 4.701356
        6     C 4.715065 3.527209
        7     C 3.460916 2.932176
        8     D 1.734939 3.782025
        9     D 2.313147 2.973996
        10    D 2.554338 3.271109
        11    D 4.224082 3.374961
        12    E 3.359814 2.313307
        13    E 3.400771 4.837787
        14    E 3.110683 4.153373

        summary(mydt)
         name      var1            var2
         A:1   Min.   :1.735   Min.   :2.033
         B:2   1st Qu.:2.608   1st Qu.:3.048
         C:4   Median :3.120   Median :3.486
         D:4   Mean   :3.203   Mean   :3.688
         E:3   3rd Qu.:3.446   3rd Qu.:4.412
               Max.   :4.715   Max.   :5.787

    I want to get rid of A, B, and E from the data as they are incomplete. Thus the expected output:

           name     var1     var2
        4     C 3.070508 2.033383
        5     C 3.129288 4.701356
        6     C 4.715065 3.527209
        7     C 3.460916 2.932176
        8     D 1.734939 3.782025
        9     D 2.313147 2.973996
        10    D 2.554338 3.271109
        11    D 4.224082 3.374961

    Please note the dataset is big, so the following may not be an option:

        mydt[mydt$name == "C", ]
        mydt[mydt$name == "D", ]
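
    A hedged base-R sketch of one way to do that without enumerating the names: tabulate the replications per level, keep the levels that reach the maximum, then subset:

        # Keep only groups replicated the maximum number of times.
        counts <- table(mydt$name)
        complete <- names(counts)[counts == max(counts)]  # "C" "D"
        droplevels(subset(mydt, name %in% complete))      # droplevels removes unused levels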


  • Change cookies when doing jQuery.ajax requests in Chrome Extensions

    - by haskellguy
    I have written a plugin for Facebook that sends data to testing-fb.local. The request goes through if the user is logged in. Here is the workflow:

    - User logs in from testing-fb.local
    - Cookies are stored
    - $.ajax() calls are fired from the Chrome extension
    - The Chrome extension listens with chrome.webRequest.onBeforeSendHeaders
    - The Chrome extension checks for cookies with chrome.cookies.get
    - Chrome changes the Set-Cookies header to be sent
    - And the request goes through

    I wrote this part of the code, which should look like this:

        function getCookies (callback) {
          chrome.cookies.get({url: "https://testing-fb.local", name: "connect.sid"}, function(a){
            return callback(a)
          })
        }

        chrome.webRequest.onBeforeSendHeaders.addListener(
          function(details) {
            getCookies(function(a){
              // Here something happens
            })
          },
          {urls: ["https://testing-fb.local/*"]},
          ['blocking']);

    Here is my manifest.json:

        {
          "name": "test-fb",
          "version": "1.0",
          "manifest_version": 1,
          "description": "testing",
          "permissions": [
            "cookies",
            "webRequest",
            "tabs",
            "http://*/*",
            "https://*/*"
          ],
          "background": {
            "scripts": ["background.js"]
          },
          "content_scripts": [
            {
              "matches": ["http://*.facebook.com/*", "https://*.facebook.com/*"],
              "exclude_matches": [
                "*://*.facebook.com/ajax/*",
                "*://*.channel.facebook.tld/*",
                "*://*.facebook.tld/pagelet/generic.php/pagelet/home/morestories.php*",
                "*://*.facebook.tld/ai.php*"
              ],
              "js": ["jquery-1.8.3.min.js", "allthefunctions.js"]
            }
          ]
        }

    In allthefunctions.js I have the $.ajax calls, and background.js is where I put the code above, which however does not appear to run. In summary, I am not clear on:

    - What I should write where "Here something happens" is
    - Whether this strategy is going to work
    - Where I should put this code
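
    One likely reason nothing seems to happen (an educated guess, not a confirmed diagnosis): a blocking webRequest listener must return its result synchronously, while chrome.cookies.get delivers its result asynchronously, so the callback fires after the listener has already returned. A hedged sketch that caches the cookie up front so the listener can answer synchronously:

        // Hedged sketch: keep the cookie cached; the blocking listener then
        // answers synchronously. Names/URLs are the poster's own.
        var cachedCookie = null;

        function refreshCookie() {
          chrome.cookies.get(
            {url: "https://testing-fb.local", name: "connect.sid"},
            function (c) { cachedCookie = c; });
        }
        refreshCookie();
        chrome.cookies.onChanged.addListener(refreshCookie);

        chrome.webRequest.onBeforeSendHeaders.addListener(
          function (details) {
            if (cachedCookie) {
              // Simplified: appends a Cookie header rather than merging with one.
              details.requestHeaders.push({
                name: "Cookie",
                value: "connect.sid=" + cachedCookie.value
              });
            }
            return {requestHeaders: details.requestHeaders};
          },
          {urls: ["https://testing-fb.local/*"]},
          ["blocking", "requestHeaders"]); // "requestHeaders" is required to modify them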


  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed as follows; this is just a summary, of course. I'm using the method shown for the "Details" table to represent a type of 'inheritance', so to speak, since "Item" and "Downloadable" are going to be identical except that each will have a few additional fields relevant only to them. My question is about this design pattern. This sort of thing appears many, many times in our projects; is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me. There are 5 items: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried to use a declaration field (like an enumerator 'Type' field) in conjunction with nullable columns, but I was told that is a bad approach. What I have done is take everything similar and place it in a single table, and then each type has its own table that references a column in the 'base' table. The problem occurs with relationships, or junctions, linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as such my database is growing very, very large. Is there a smarter practice for this kind of behavior?

        Item
            ID   | GUID
            Name | varchar(64)

        Product
            ID      | GUID
            Name    | varchar(64)
            Store   | GUID [FK]
            Details | GUID [FK]

        Downloadable
            ID      | GUID
            Name    | varchar(64)
            Url     | nvarchar(2048)
            Details | GUID [FK]

        Details
            ID          | GUID
            Price       | decimal
            Description | text

        Peripherals [JUNCTION]
            ID     | GUID
            Detail | GUID [FK]

        Store
            ID        | GUID
            Addresses | GUID

        Addresses
            ID      | GUID
            Name    | nvarchar(64)
            State   | int [FK]
            ZipCode | int
            Address | nvarchar(64)

        State
            ID   | int
            Name | varchar(32)


  • Using ddply() to Get Frequency of Certain IDs, by Appearance in Multiple Rows (in R)

    - by EconomiCurtis
    Goal: if the following description is hard to follow, please see the example "before" and "after" below for a straightforward example. I have bartering data, with unique trade IDs and two sides of each trade. Side1 and Side2 are baskets, lists of item IDs that represent both sides of the barter transaction. I'd like to count the frequency with which each ITEM appears in TRADES. E.g., if item "001" appeared in 3 trades, I'd have a count of 3 (ignoring how many times the item appeared within each trade). Further, I'd like to do this with the plyr ddply function. (If you're interested in my motivation: I'm working over many hundreds of thousands of transactions and am already using ddply to calculate several other summary statistics. I'd like to add this to the ddply call I'm already using, rather than calculate it afterwards and merge it into the ddply output... sorry if that was difficult to follow.) In terms of the pseudocode I'm working from:

    - merge each row of Side1 and Side2, by row
    - get unique() appearances of each item id
    - apply the table() function
    - transpose and relabel output from table

    Data example (before):

        df <- data.frame(TradeID = c("01","02","03","04"))
        df$Side1 = list(c("001","001","002"), c("002","002","003"), c("001","004"), c("001","002","003","004"))
        df$Side2 = list(c("001"), c("007"), c("009"), c())

    Desired output (after):

        df.ItemRelFreq_byTradeID <- data.frame(ItemID = c("001","002","003","004","007","009"),
                                               RelFreq_byTrade = c(3,3,2,2,1,1))

    One method to do this without ddply: I've worked out one way to do this below. My problem is that I can't quite seem to get ddply to do this for me.

        temp <- table(unlist(sapply(mapply(c, df$Side1, df$Side2), unique)))
        df.ItemRelFreq_byTradeID <- data.frame(ItemID = names(temp),
                                               RelFreq_byTrade = temp[])

    Thanks for any help you can offer! Curtis
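
    A hedged plyr-based sketch (an untested suggestion, not from the thread): explode each trade into one row per unique item with ddply, then count the items; plyr's count() returns the ItemID/freq pairs:

        library(plyr)

        # One row per unique item per trade, built per TradeID group.
        long <- ddply(df, .(TradeID), function(d)
          data.frame(ItemID = unique(c(d$Side1[[1]], d$Side2[[1]]))))

        # freq = number of trades each item appears in.
        count(long, vars = "ItemID")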


  • OpenGL multiple threads, variable handling [closed]

    - by toeplitz
    I have written an OpenGL program which runs in the following way:

    Main:
    - Initialize SDL
    - Create the thread which owns the OpenGL context:
        - Render loop
            - Set camera (view) matrix with glUniform
            - glDrawElements() ... etc.
            - SwapBuffers()
    - Main SDL loop handling input events and such
        - Update camera matrix of type glm::mat4

    This is how I pass my camera object to the class that handles OpenGL:

        Camera *cam = new Camera();
        gl.setCam(cam);

    where

        void setCam(Camera *camera) {
            this->camera = camera;
        }

    For rendering, this happens in the OpenGL context thread:

        glm::mat4 modelView = camera->view * model;
        glUniformMatrix4fv(shader->bindUniform("modelView"), 1, GL_FALSE, glm::value_ptr(modelView));

    In the main program, where SDL and other things are handled, I then recompute the view matrix. This is working fine without me using any mutex locks. Is this correct? On the other hand, I add objects to my scene through an "upload queue", and in that case I have to mutex-lock my upload queue vector (a vector class type) when adding items to it, or else the program crashes. In summary: I recompute my matrix in a different thread and then use it in the OpenGL thread without any mutex lock. Why is this working? Edit: I think my question is similar to what was asked here: "Should I lock a variable in one thread if I only need its value in other threads, and why does it work if I don't?", only in my case it is even simpler, with only one matrix being changed.
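
    For what it's worth, an unsynchronized write in one thread and read in another of a 16-float glm::mat4 is a data race (undefined behavior in C++11) even when it appears to work; at worst you typically render one frame from a half-updated matrix, which is easy to miss. A hedged sketch of the usual guard:

        // Hedged sketch: copy the shared matrix under a lock, render from the copy.
        #include <mutex>
        #include <glm/glm.hpp>

        std::mutex camMutex;
        glm::mat4 sharedView;

        // SDL/main thread:
        void updateCamera(const glm::mat4 &newView) {
            std::lock_guard<std::mutex> lock(camMutex);
            sharedView = newView;
        }

        // Render thread, once per frame:
        glm::mat4 snapshotView() {
            std::lock_guard<std::mutex> lock(camMutex);
            return sharedView;   // short critical section: just a 64-byte copy
        }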


  • Drupal 7: How can I create a key/value field(or field group, if that's even possible)?

    - by Su'
    Let's say I'm creating some app documentation. In creating a content type for functions, I have a text field for the name, a box for a general description, and a couple of other basic things. Now I need something for storing arguments to the function. Ideally, I'd like to input these as key-value pairs, or just two related fields, which can then be repeated as many times as needed for the given function. But I can't find any way to accomplish this. The closest I've gotten is an abandoned field multigroup module that says to wait for CCK3, which hasn't even produced an alpha yet as far as I can tell, and whose project page makes no obvious mention of this multi-group functionality. I also checked the CCK issue queue and don't think I saw it in there either. Is there a current viable way of doing this that I'm not seeing? Viable includes "you're thinking of this the wrong way and should do X instead." I've considered using a "Long text and summary" field, but that smells hackish and I don't know if I'd be setting myself up for side effects. I'm new to Drupal.


  • how to pass a time interval in a ListPreference option

    - by user1748932
        <ListPreference
            android:entries="@array/listOptions2"
            android:entryValues="@array/listValues2"
            android:key="listprefrefresh"
            android:summary="set Refresh The Applciation"
            android:title="Set TIme Intervale" />

        <integer-array name="listOptions2">
            <item>10</item>
            <item>30</item>
        </integer-array>
        <integer-array name="listValues2">
            <item>10000</item>
            <item>30000</item>
        </integer-array>

        public static final String PREF_BEER_SIZE2 = "listprefrefresh";

        Preference beerPref2 = (Preference) findPreference(PREF_BEER_SIZE2);
        beerPref2.setOnPreferenceChangeListener(new Preference.OnPreferenceChangeListener() {
            public boolean onPreferenceChange(Preference preference, Object newValue) {
                final ListPreference listrefresh = (ListPreference) preference;
                final int idx = listrefresh.findIndexOfValue((String) newValue);
                if (idx == 0) {
                    handler.post(timedTask);
                // } else if (idx == 1) {
                //     System.out.println("2");
                }
                return true;
            }
        });

    This is my code. I want to pass a time interval; how can I implement that? Right now I am passing an integer value. Please help.
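
    A hedged sketch of one way to use the selected interval directly instead of branching on its index: parse the entry value (already in milliseconds) and reschedule the task with it. handler and timedTask are assumed to be the poster's existing fields:

        // Hedged sketch: parse the chosen value and repeat at that interval.
        public boolean onPreferenceChange(Preference preference, Object newValue) {
            final long intervalMs = Long.parseLong((String) newValue); // "10000" or "30000"
            handler.postDelayed(new Runnable() {
                @Override public void run() {
                    timedTask.run();                        // the existing refresh task
                    handler.postDelayed(this, intervalMs);  // reschedule at the chosen interval
                }
            }, intervalMs);
            return true;
        }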


  • Wrapping <%= f.check_box %> inside <label>

    - by Ben Scheirman
    I have a list of checkboxes on a form. Due to the way the CSS is structured, the label element is styled directly. This requires me to nest the checkbox inside of the tag. This works in raw HTML: if you click on the label text, the state of the checkbox changes. It doesn't work with the Rails <%= f.check_box %> helper, however, because it outputs a hidden input tag first. In summary:

        <label>
          <%= f.check_box :foo %>
          Foo
        </label>

    This is the output I want:

        <label>
          <input type="checkbox" ... />
          <input type="hidden" ... />
          Foo
        </label>

    ...but this is what Rails is giving me:

        <label>
          <input type="hidden" ... />
          <input type="checkbox" ... />
          Foo
        </label>

    So the label behavior doesn't actually work :(. Is there any way to get around this?
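
    A caution and a hedged workaround: simply swapping the two inputs would change the form semantics, because Rails emits the hidden "0" first so that a checked box's "1" wins when the parameter parser keeps the last value for a duplicate key. One alternative that keeps Rails' order but makes the label association explicit (assuming a form_for builder for @model, so the generated checkbox id would be "model_foo"):

        <%# Hedged sketch: explicit "for" attribute; Rails' input order unchanged. %>
        <label for="model_foo">
          <%= f.check_box :foo %>
          Foo
        </label>

    With an explicit for attribute pointing at the checkbox id, clicking the label text toggles the checkbox regardless of where the hidden input sits.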


  • spaces or %20 in links turn into + signs when page is sent as an email

    - by Obay
    I am creating a web app that accepts input of news items (title, article, url). It has a page news.php which creates a summary of all news items inputted for specified dates, like so:

        News 4/25/2010
        Title 1 [URL 1]
        Article 1
        Title 2 [URL 2]
        Article 2
        and so on...

    I have two other pages, namely preview.php and send.php, both of which call news.php through a file_get_contents() call. Everything works fine except when the URL contains spaces. During Preview, the URLs get opened (FF: spaces are spaces; Chrome: spaces are %20). However, during Send, when received as emails, the URLs don't get opened, because the spaces are converted into + signs. For example:

        1. Preview in FF:                    http://www.example.com/this is the link.html
        2. Preview in Chrome:                http://www.example.com/this%20is%20the%20link.html
        3. Viewed as email in both browsers: http://www.example.com/this+is+the+link.html

    Only #3 doesn't work (the link doesn't get opened). Why are the spaces in the URLs correct (spaces or %20) when previewed, but incorrect (+) when received in the emails, when in fact the same page is generated by the same news.php? Any help appreciated :)
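
    A hedged guess at the usual culprit: somewhere on the send path the URL passes through urlencode() (or http_build_query()), which uses application/x-www-form-urlencoded rules where a space becomes +, while rawurlencode() follows RFC 3986 and produces the %20 that link URLs need:

        <?php
        // Hedged sketch of the difference; encode path segments, not the whole URL.
        $name = 'this is the link.html';
        echo urlencode($name);    // "this+is+the+link.html"       (form encoding)
        echo rawurlencode($name); // "this%20is%20the%20link.html" (RFC 3986)
        $url = 'http://www.example.com/' . rawurlencode($name);

    Checking how send.php encodes the page before mailing it would confirm or rule this out.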


  • Random servers in a Citrix farm suddenly bluescreen (mostly 0x0000008e and 0x0000007e)

    - by Rasmus Rask
    I'm responsible for a Citrix Presentation Server 4.5 farm. Starting Friday 30 November, my servers started to crash randomly. So far we've experienced 80 crashes, so it's obviously becoming an increasingly big problem for us. I have 12+ years of experience with IT, so I know the difference between 0 and 1, but I'm having a hard time cracking this one. We've rolled back any recent changes I can think of for different groups of servers, but all groups still seem to crash. I don't have the skills to interpret the memory dumps to find the culprit.

    - Has anyone encountered the same or a similar problem? It might be a generic Windows issue.
    - Other than executing "!analyze -v" in WinDbg, how do I work my way through the memory dumps to see what actually triggered the BSOD?
    - Any suggested steps in getting to the bottom of this?

    Any help is greatly appreciated. I can also provide links to kernel memory dumps or WinDbg output if necessary. Thanks!

    Problem description: the majority of the STOP errors we encounter are:

        0x0000008e KERNEL_MODE_EXCEPTION_NOT_HANDLED  (50%)
        0x0000007e SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (26%)
        0x00000050 PAGE_FAULT_IN_NONPAGED_AREA         (21%)

    We also see a few 0x0000000a IRQL_NOT_LESS_OR_EQUAL (3%). For both the 0x0000008e and 0x0000007e bug checks, the exception code is 0xc0000005 (Access Violation). When opening dump files in WinDbg, most details are exactly the same, for all the 0x0000008e and 0x0000007e bug checks respectively:

        0x0000008e
        Exception address: 0x808bc9e3
        Trap frame: [varies]
        FAILURE_BUCKET_ID: 0x8E_nt!HvpGetCellMapped+97
        Probably Caused by (IMAGE_NAME): ntkrpamp.exe

        0x0000007e
        Exception address: 0x808369b6
        Exception record address: 0xf70d3be0
        Context record address: 0xf70d38dc
        FAILURE_BUCKET_ID: 0x7E_nt!MmPurgeSection+14
        Probably Caused by: memory_corruption

    About 30% of the crashes happen between 17:00 and 19:00, which leads me to believe these tend to happen more often during logoffs. But then again, only ~15% occur between 15:00 and 17:00.

    Summary of farm:

    - Citrix Presentation Server 4.5 R06 on Windows Server 2003 R2 SP2
    - All high-priority patches, at least as of October, installed
    - Virtualized using VMware ESX/vSphere 4.1 on HP ProLiant BL460c G6 blade servers
    - About 53 Presentation Servers in production, divided into three silos, only one of which (the largest) is affected
    - 2 vCPUs (5 GHz reserved), 8 GB RAM (all reserved) for each Presentation Server
    - Plenty of free disk space
    - Very few printer drivers; automated deletion of non-approved drivers every night
    - ~1,000 peak concurrent users, reached at around 10:30 (on weekdays)
    - Number of sessions steadily declines between 15:00 and 19:00 to ~230
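
    A few standard WinDbg follow-ups after "!analyze -v" (a hedged, generic checklist; the parenthetical notes are annotations, not part of the commands):

        !analyze -v       (verbose triage: bucket ID, exception record, probable cause)
        .trap <address>   (for 0x8E: switch to the trap frame listed in the bug check parameters)
        .cxr 0xf70d38dc   (for 0x7E: switch to the context record address from the dump)
        kv                (stack walk after switching context; look for non-Microsoft driver frames)
        lm t n            (loaded modules with timestamps; spot outdated drivers)
        !thread           (current thread, usually shows the owning process)

    Since two buckets blame ntkrpamp.exe and "memory_corruption", running hardware memory diagnostics and enabling Driver Verifier on one test server are common next steps.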


  • VPN IP Routing - slow connections

    - by dannymcc
    UPDATE: Router error logs show:

        LCP Time-out 0

    I'm not sure how to correct this. The LAN-to-LAN profiles are set to -1 Idle Timeout (for the remote branch). I have a PPTP VPN running between two DrayTek 2820 routers. They are set up so that one dials out to the other.

        Main Practice - 192.168.1.0/24
        Branch        - 192.168.3.0/24

    I have then set (on the Branch router) the following route: 192.168.1.0/24. If I request a server running on 192.168.1.1 from the Branch, it correctly routes through the VPN tunnel. If I request the branch server at 192.168.3.1, it correctly routes to the local server without using the VPN tunnel. I have temporarily disabled the firewall on both routers and made sure that QoS is disabled. The Main Practice internet connection is ~30mb down / ~10mb up, and the Branch connection is ~5mb down / ~2mb up. Anything over the VPN tunnel runs slowly (VNC, Remote Desktop, and terminal emulators). However, if I dial using the Windows VPN wizard, creating a connection from the laptop to the Main Practice, everything runs quickly. I'm looking for possible causes and/or ways of further diagnosing the issue. Any help would be greatly appreciated!

    UPDATE: In summary, when I connect from within the Branch and try to access a host that's within the Main Practice, it works, but slowly. If I then dial the VPN on my Windows 7 laptop whilst still connected to the Branch network, it's fast.

    Routing table from Branch router:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *   0.0.0.0/         0.0.0.0         via 126.256.126.103   WAN2
        C~  192.168.1.99/    255.255.255.255 directly connected    VPN-1
        S~  192.168.1.0/     255.255.255.0   via 192.168.1.99      VPN-1
        S~  192.168.2.0/     255.255.255.0   via 192.168.1.99      VPN-1
        C~  192.168.3.0/     255.255.255.0   directly connected    LAN2
        C   126.256.126.103/ 255.255.255.224 directly connected    WAN2

    Routing table from Main Practice router:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *   0.0.0.0/        0.0.0.0         via 81.139.64.1,     WAN2
        S   81.137.176.1/   255.255.255.255 via 81.137.176.1,    WAN2
        *   81.139.64.1/    255.255.255.255 via 81.139.64.1,     WAN2
        C~  192.168.1.204/  255.255.255.255 is directly connected, VPN
        C~  192.168.1.0/    255.255.255.0   is directly connected, LAN
        S~  192.168.2.0/    255.255.255.0   via 192.168.1.204,   VPN
        S~  192.168.3.0/    255.255.255.0   via 192.168.1.203,   VPN

    Connection details (from Branch router): [screenshot not included]
    Connection details (from Main Practice router): [screenshot not included]
    IPERF.exe output: [not included]


  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB, and it is the only one on the host that I'm trying to back up. I proceeded to start the actual backup:

        ./ghettoVCB.sh -m vmname -g ghettoVCB.conf

    It goes through the config and looks like it's taking off:

        2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf
        2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0
        2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18
        2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
        2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info
        2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0
        2013-10-24 11:43:19 -- info:
        2013-10-24 11:43:22 -- info: Initiate backup for vmname
        2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'...
        Clone: 10% done.

    and it's been that way for over an hour now, stuck at "Clone: 10% done." The thing is: I can see the vmdk on the NAS, and it looks like almost the whole thing is there. On the NAS it's showing ~430GB, but the vSphere Client Summary shows it as 507GB. I don't see the vmdk on the NAS growing any more. The log file mimics some of the above and is sitting at "Creating Snapshot..."; nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? I.e., is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there a reason it might be "stuck" at 10%? I.e., could it really be taking this long? Any other tips? Thanks.

    Edit: as soon as I hit the Submit button, I glanced over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.

