Search Results

Search found 16914 results on 677 pages for 'single threaded'.

Page 545/677 | < Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >

  • How can I speed up insertion of many rows into a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at generating random values on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time; it ends up in a List of custom objects. I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.

        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I have no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase
        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER' UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN'
        -- etc., another 98 times... (FYI, this is not real data!)

    I'm building this INSERT script in batches of 100, and it's taking on average 175 ms to run each insert. Does this seem like a long time? At that rate the whole insert will take about 35 minutes. The table doesn't have a primary key or any indexes; I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
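
    For reference, a minimal sketch of the SqlBulkCopy approach, which streams the rows in one operation instead of batched INSERT statements (the connection string, list variable and row type names are assumptions, not from the question):

        // using System.Data; using System.Data.SqlClient;
        // Build a DataTable matching the destination columns.
        var table = new DataTable();
        table.Columns.Add("AcctId", typeof(int));
        table.Columns.Add("Address1", typeof(string));
        table.Columns.Add("Address2", typeof(string));
        table.Columns.Add("Person1", typeof(string));
        table.Columns.Add("Person2", typeof(string));

        foreach (var row in randomRows)   // randomRows: the 975k-item List of custom objects
            table.Rows.Add(row.AcctId, row.Address1, row.Address2, row.Person1, row.Person2);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "tmp_RandomizedData";
            bulk.BatchSize = 5000;             // commit in chunks rather than one huge transaction
            bulk.WriteToServer(table);          // bulk-loads all rows server-side
        }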

    Read the article

  • In languages which create a new scope for each iteration of a loop block, is a new local copy of the loop variable created?

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the variable defined for the loop is actually made into a fresh local variable every single time and recorded in this new scope? For example, in Ruby:

        p RUBY_VERSION
        $foo = []
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end
        (1..5).each do |j|
          $foo[j].call()
        end

    the printout is:

        [MacBook01:~] $ ruby scope.rb
        "1.8.6"
        1
        2
        3
        4
        5
        [MacBook01:~] $

    So it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the lambda is executed at a later time, the i is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with:

        p RUBY_VERSION
        $foo = []
        i = 0
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end
        (1..5).each do |j|
          $foo[j].call()
        end

    This time, i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when i is looked up in the scope chain it always refers to the i in the outside scope, and gives 5 every time:

        [MacBook01:~] $ ruby scope2.rb
        "1.8.6"
        5
        5
        5
        5
        5
        [MacBook01:~] $

    I heard that in Ruby 1.9, i will be treated as a local defined for the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i each time through the loop seems heavy, and it wouldn't have mattered if we weren't invoking the lambdas at a later time. So when the functions don't need to be invoked at a later time, could the interpreter (or the compiler, for C / Java) optimize this so that there is no local copy of i each time?
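
    For reference, a minimal sketch of the 1.8 vs 1.9 difference described above (the semicolon form is 1.9-only syntax; output comments assume those versions):

        # A block parameter in Ruby 1.9 is always local to the block, even if an
        # outer variable with the same name exists; in 1.8.6 it reuses the outer one.
        i = 0
        procs = (1..5).map { |i| lambda { i } }
        p procs.map { |f| f.call }   # 1.8.6 => [5, 5, 5, 5, 5]; 1.9 => [1, 2, 3, 4, 5]

        # 1.9 also allows explicit block-locals after a semicolon:
        procs2 = (1..5).map { |n; j| j = n * 2; lambda { j } }
        p procs2.map { |f| f.call }  # => [2, 4, 6, 8, 10]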

    Read the article

  • Is a MySQL index useful on a 'state' column when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run (simplified). Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slowly. What I'd like to know: Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-jan
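
    For reference, a hedged sketch of one common workaround: an index on 'flowstate' cannot be used for a bit-shift predicate, but the bits that are queried most often can be materialized as plain indexed flag columns (column and index names below are assumptions):

        -- Materialize the "operation 7 has run" bit as its own flag column.
        ALTER TABLE entities ADD COLUMN op7_done TINYINT(1) NOT NULL DEFAULT 0;
        UPDATE entities SET op7_done = (flowstate >> 7) & 1;
        CREATE INDEX idx_entities_op7_done ON entities (op7_done);

        -- The expression-based predicate becomes a sargable equality test:
        SELECT * FROM entities WHERE op7_done = 1;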

    Read the article

  • Caching sitemaps in Django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1MB. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The other problem is that Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low-level and I'm wondering if an obvious solution exists...
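
    For reference, a hedged sketch of the "restrict the page size and cache the rendered XML" route; the limit attribute and the exact URLconf wiring are assumptions to verify against the Django version in use:

        from django.contrib.sitemaps import Sitemap
        from django.contrib.sitemaps.views import sitemap
        from django.views.decorators.cache import cache_page

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7
            limit = 10000  # assumed attribute: per-page URL cap (framework default is 50,000)

            def items(self):
                # ShortReview is the app's own model, imported elsewhere
                return ShortReview.objects.order_by("-created_at")

            def lastmod(self, obj):
                return obj.updated_at

        # urls.py: cache each rendered sitemap page (small XML) instead of the queryset:
        # (r'^sitemap\.xml$', cache_page(60 * 60)(sitemap),
        #  {'sitemaps': {'short_reviews': ShortReviewsSitemap}}),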

    Read the article

  • TypeError: Error #1009 (null reference error) with Flash

    - by Wind Chimez
    I am not an expert in Flash, but I do work with ActionScript and tweak Flash projects, though without deep expertise in it. Currently I need to revamp a Flash website done by another guy, and the code base given to me throws the following error upon execution:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at NewSite_fla::MainTimeline/__setProp_ContactOutP1_ContactOut_Contents_0()
            at NewSite_fla::MainTimeline/frame1()

    The structure of the project is that the different sections are split into different movie clips. There is no single main timeline; instead, click actions on different areas of separate movie clips navigate between them. All the ActionScript event-handling logic is written inline in the FLA; no separate Document class exists. The Preloader movie clip is the first one to load. As I understand it, the error is thrown right at startup, and it is not caused by any of the inline ActionScript logic, because it is thrown even before the first inline AS code is hit. I am not able to figure out what exactly is causing the problem, or where to resolve it, and I'm really stuck at this point. Any help will be great; I have not yet seen the particular solution I am looking for anywhere, even though Error #1009 is common. I uploaded the FLA here (http://tinyurl.com/249e95p) for reference. It may not run, since the included/referenced video files are missing, but reviewing the ActionScript and movie clips should be possible. Please let me know if somebody is able to trace the issue exactly.

    Read the article

  • Defining an implementation independent version of the global object in JavaScript

    - by Aadit M Shah
    I'm trying to define the global object in JavaScript in a single line as follows:

        var global = this.global || this;

    The above statement is in the global scope, hence in browsers the this pointer is an alias for the window object. Assuming that it's the first line of JavaScript to be executed in the context of the current web page, the value of global will always be the same as that of the this pointer or the window object. In CommonJS implementations such as RingoJS and node.js, the this pointer points to the current ModuleScope. However, we can access the global object through the property global defined on the ModuleScope, i.e. via the this.global property. Hence this code snippet works in all browsers and in at least RingoJS and node.js, but I have not tested other CommonJS implementations. Thus I would like to know whether this code will yield incorrect results when run on any other CommonJS implementation, and if so how I may fix it. Eventually, I intend to use it in a lambda expression for my implementation-independent JavaScript framework as follows (idea from jQuery):

        (function (global) {
            // javascript framework
        })(this.global || this);
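
    For reference, a hedged sketch of a fallback chain that does not rely on a host exposing a this.global property; this is only a sketch of the idea, not a tested cross-host solution:

        // Pick whichever global alias the host happens to expose:
        // window in browsers, global in Node-style CommonJS hosts, this as a last resort.
        (function (global) {
            // framework body goes here
        })(typeof window !== "undefined" ? window :
           typeof global !== "undefined" ? global :
           this);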

    Read the article

  • Serializing a part of object graph

    - by Felix
    Hi all, I have a problem regarding Java custom serialization. I have a graph of objects and want to configure where serialization stops when I send a root object from client to server. Let me make it concrete with a sample scenario. I have classes of type:

        Company
        Employee (abstract)
        Manager extends Employee
        Secretary extends Employee
        Analyst extends Employee
        Project

    Here are the relations:

        Company(1) --- (n)Employee
        Manager(1) --- (n)Project
        Analyst(1) --- (n)Project

    Imagine I'm on the client side and I want to create a new company, assign it 10 employees (new or existing ones) and send this new company to the server. What I expect in this scenario is to serialize the company and all bound employees to the server side, because I'll save the relations in the database. So far no problem, since the default Java serialization mechanism serializes the whole object graph, excluding the fields which are static or transient.

    My goal concerns the following scenario. Imagine I loaded a company and its 1000 employees from the server to the client side. Now I only want to rename the company (or change some other field that belongs directly to the company) and update this record. This time I want to send only the company object to the server side and not the whole list of employees (I just update the name; the employees are irrelevant in this use case). My aim also includes the configurability of saying: transfer the company AND the employees, but not the Project relations -- you must stop there. Do you know any possibility of achieving this in a generic way, without implementing writeObject and readObject for every single entity object? What would be your suggestions? I would really appreciate your answers. I'm open to any ideas and am ready to answer your questions in case something is not clear.
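
    For reference, a minimal sketch of the lowest-tech cut-off point, the transient keyword (class shapes assumed); note this is per-field rather than per-use-case, so use-case-specific graphs usually end up as dedicated DTOs instead:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        // Default serialization skips the transient employees list, so sending a renamed
        // Company does not drag its 1000 Employee objects across the wire.
        // Caveat: after deserialization the transient field is null and must be repopulated.
        public class Company implements Serializable {
            private static final long serialVersionUID = 1L;

            private String name;
            private transient List<Employee> employees = new ArrayList<Employee>();

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public List<Employee> getEmployees() { return employees; }
        }

        class Employee implements Serializable {
            private static final long serialVersionUID = 1L;
            private String name;
        }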

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

    - Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
    - Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are:

    - Storing images in a way that allows for rapid random access (tiling)
    - Storing downsampled images for rapid zooming (pyramid)
    - Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
    - Storing important metadata, including associated images like a slide's label and thumbnail
    - Support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet as one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., OR storing them on the file system? One issue we have is that we have multiple load-balanced servers that need to have the same data at all times. As of now SQL replication takes care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or dynamically creating the requested resolution on demand. Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead load of database vs file system
    - Data management and backup

    Does anyone have a similar situation or have any input on what would be recommended? Thanks in advance for the help!

    Read the article

  • Website --> Add Reference also creates files with pdb extensions

    - by SourceC
    Hello,

    Q1: Any assemblies stored in the Bin directory will automatically be referenced by the web application. We can add an assembly reference via Website --> Add Reference or simply by copying the dll into the Bin folder. But I noticed that when we add a reference via Website --> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are also needed, then why does the reference still work even if we only place the referenced dll into Bin, but not the pdb files?

    Q2: It appears that if you add a new item to a web project, this class will automatically be added to the project list and we can reference it from all pages in this project. So are all files added to the project list automatically being referenced? Thanks.

    EDIT: On your second question, you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason an item (class) added to the project is visible to other classes in that project is simply that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other? Much appreciated.

    Read the article

  • PHP Json Encoding w/ quote escaping in 5.2?

    - by NickAldwin
    I'm playing with the Flickr API and PHP. I want to pass some information from PHP to JavaScript through Ajax. I have the following code:

        json_encode($pics);

    which results in the following example JSON string:

        [{"id":"4363603591","title":"blue, white and red...another seattle view","date_faved":"1266379499"},
         {"id":"4004908219","title":"\u201cI just told you my dreams and you made me see that I could walk into the sun and I could still be me and now I can't deny nothing lasts forever.\u201d","date_faved":"1259987670"}]

    JavaScript has problems with this, however, due to the unescaped single quote in the second item ("can't deny"). I want to use json_encode with its options parameter to make it strip the quotes, but that's only available in PHP 5.3, and I'm running 5.2 (not my server). Is there a fast way to run through the entire array and escape everything before encoding it as JSON? I looked for a way to do this, but everything seems to deal with escaping the data as it is generated, something I cannot do as I'm not the one generating the data. If it helps, I'm currently using the following JavaScript after the Ajax request:

        var photos = eval('(' + resptxt + ')');
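
    For reference, a hedged sketch of the client-side alternative: treat the response as data rather than code, which sidesteps quote-escaping concerns entirely (json2.js is assumed as the fallback for browsers without a native JSON object):

        // Parse the Ajax response with a JSON parser instead of eval().
        var photos = JSON.parse(resptxt);   // needs json2.js on older browsers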

    Read the article

  • Key Tips in WPF

    - by Brad Leach
    Office 2007 and the Ribbon introduced the concept of "Key Tips". In short, every single command in the Ribbon receives a letter which you can press to activate that command. ... The letters are indicated by small "KeyTips" which show the letter to press to activate the control. KeyTips are displayed using the Alt key, so using them feels similar to how menu navigation works in Windows. (Source: http://blogs.msdn.com/jensenh/archive/2006/04/12/574930.aspx) An example of the Key Tips is shown in the referenced diagram, where the user has pressed the ALT key and the UI is awaiting further input. Are there any WPF open source examples of "Key Tips"? How would you go about implementing something like this feature in a generic way (i.e. not requiring a Ribbon)? How would you implement this using an MVVM pattern (given that ICommand does not support InputBindings)? Note: ActiPro have implemented this feature in their implementation of a Ribbon, but they have not released source code.

    Read the article

  • Is there a way to optimize this MySQL query?

    - by SpikETidE
    Hi everyone. Say I've got these two tables:

        Table 1: Hotels
        hotel_id    hotel_name
        1           abc
        2           xyz
        3           efg

        Table 2: Payments
        payment_id  payment_date  hotel_id  total_amt  comission
        p1          23-03-2010    1         100        10
        p2          23-03-2010    2         50         5
        p3          23-03-2010    2         200        25
        p4          23-03-2010    1         40         2

    Now I need to get the following details from the two tables:

    1. Given a particular date (say, 23-03-2010), the sum of total_amt for each hotel for which a payment has been made on that date.
    2. All the rows that have the date 23-03-2010, ordered according to the hotel name.

    A sample output is as follows:

        +------------+------------+------------+---------------+
        | hotel_name | date       | total_amt  | commission    |
        +------------+------------+------------+---------------+
        | * abc      | 23-03-2010 | 140        | 12            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p1        | 23-03-2010 | 100        | 10           ||
        |+-----------+------------+------------+--------------+|
        || p4        | 23-03-2010 | 40         | 2            ||
        |+-----------+------------+------------+--------------+|
        +------------+------------+------------+---------------+
        | * xyz      | 23-03-2010 | 250        | 30            |
        +------------+------------+------------+---------------+
        |+-----------+------------+------------+--------------+|
        || paymt_id  | date       | total_amt  | commission   ||
        |+-----------+------------+------------+--------------+|
        || p2        | 23-03-2010 | 50         | 5            ||
        |+-----------+------------+------------+--------------+|
        || p3        | 23-03-2010 | 200        | 25           ||
        |+-----------+------------+------------+--------------+|
        +------------------------------------------------------+

    Above is a sample of the table that has to be printed. The idea is first to show the consolidated detail for each hotel, and when the '*' next to the hotel name is clicked, the breakdown of the payment details becomes visible... but that can be done with some jQuery, and the table itself can be generated with PHP. Right now I am using two separate queries: one to get the sum of the amount and commission grouped by the hotel name, and the next to get the individual rows for each entry having that date. This is, of course, because grouping the records to calculate sum() returns only one row per hotel with the sum of the amounts. Is there a way to combine these two queries into a single one and do the operation in a more optimized way? Hope I am being clear. Thanks for your time and replies.
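
    For reference, a sketch of one query that returns both the per-payment rows and a subtotal row per hotel, using MySQL's WITH ROLLUP (table and column names taken from the question; the date literal assumes a DATE column):

        SELECT h.hotel_name,
               p.payment_id,
               SUM(p.total_amt) AS total_amt,
               SUM(p.comission) AS comission
        FROM   payments p
               JOIN hotels h ON h.hotel_id = p.hotel_id
        WHERE  p.payment_date = '2010-03-23'
        GROUP  BY h.hotel_name, p.payment_id WITH ROLLUP;
        -- Rows where payment_id IS NULL are the per-hotel totals; the final row
        -- (hotel_name IS NULL) is the grand total and can be skipped when rendering.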

    Read the article

  • Debian packaging of a Python package.

    - by chrisdew
    I need to write (or find) a script to create a Debian package (using python-support) from a Python package. The Python package will be pure Python (no C extensions). For testing purposes, the Python package will just be a directory with an empty __init__.py file and a single Python module, package_test.py. The packaging script must use python-support to provide the correct bytecode for the possible multiple installations of Python on a target platform (i.e. v2.5 and v2.6 on Ubuntu Jaunty). Most of the advice I find while googling consists of nasty hack examples that don't even use python-support or python-central. I have so far spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project -- but I don't know which bits are required for what I'm doing. Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way? I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to a working script. How long has it taken others? Any advice? Chris.
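
    For reference, a hedged sketch of the stdeb route, which generates the debian/ directory from an existing setup.py (it assumes the package already has a setup.py and that stdeb's defaults, which delegate byte-compilation to the distribution's Python helper, are acceptable):

        # Install the tool, then build a .deb straight from setup.py.
        apt-get install python-stdeb
        python setup.py --command-packages=stdeb.command bdist_deb
        ls deb_dist/*.deb   # the generated package ends up under deb_dist/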

    Read the article

  • LINQ Query using UDF that receives parameters from the query

    - by Ben Fidge
    I need help using a UDF in a LINQ query which calculates a user's distance from a fixed point:

        int pointX = 567, pointY = 534; // random points on a square grid

        var q = from n in _context.Users
                join m in _context.GetUserDistance(n.posY, n.posY, pointX, pointY, n.UserId)
                    on n.UserId equals m.UserId
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };

    GetUserDistance is just a UDF that returns a single row (with that user's distance from the point designated by the pointX and pointY variables), and the designer generates the following for it:

        [global::System.Data.Linq.Mapping.FunctionAttribute(Name="dbo.GetUserDistance", IsComposable=true)]
        public IQueryable<GetUserDistanceResult> GetUserDistance(
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X1", DbType="Int")] System.Nullable<int> x1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X2", DbType="Int")] System.Nullable<int> x2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y1", DbType="Int")] System.Nullable<int> y1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y2", DbType="Int")] System.Nullable<int> y2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="UserId", DbType="Int")] System.Nullable<int> userId)
        {
            return this.CreateMethodCallQuery<GetUserDistanceResult>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), x1, x2, y1, y2, userId);
        }

    When I try to compile, I get: The name 'n' does not exist in the current context.
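
    For reference, a sketch of the usual fix: the range variable n is not in scope inside a join source, but it is in scope for a second from clause, which LINQ to SQL translates into a CROSS/OUTER APPLY of the table-valued function (property casing assumed):

        int pointX = 567, pointY = 534;

        var q = from n in _context.Users
                from m in _context.GetUserDistance(n.PosX, n.PosY, pointX, pointY, n.UserId)
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };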

    Read the article

  • Windows Form user control hosted in WPF - How to capture the leave event?

    - by OKB
    Hi, in my WPF application I'm hosting a custom Windows Forms user control together with other WPF controls. My custom user control is hosted in WPF using a WindowsFormsHost control. This custom user control (the parent, so to speak) contains other custom WinForms controls (child controls). The child controls can be single or composite controls. How can I capture the Leave event on a child control when the user navigates from the last child user control in the parent custom user control to a WPF control? According to MSDN (http://msdn.microsoft.com/en-us/library/ms751797.aspx), the Leave event is not supported in the following scenarios -- Enter and Leave events are not raised when the following focus changes occur:

    1. From inside to outside a WindowsFormsHost control.
    2. From outside to inside a WindowsFormsHost control.
    3. Outside a WindowsFormsHost control.
    4. From a Windows Forms control hosted in a WindowsFormsHost control to an ElementHost control hosted inside the same WindowsFormsHost.

    Scenarios 1 and 2 are exactly what I'm struggling with. Do you have any solution to this problem? Some workaround or anything is appreciated :) Best Regards, OKB

    Read the article

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid-nineties in there!). A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this...

    Example of an inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of an outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this:

    Example of an inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of an outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax -- we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code rewriting database queries now, and it wouldn't pay us if we did. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, say that while they agree we don't have the time to go back and change existing queries, we really ought to adopt the "new" syntax in code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntaxes there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future.

    Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers -- so let the games commence! Martin. PS: I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

    Read the article

  • Intent Bundle returns Null every time?

    - by Max
    I have some extras sent to a new intent. There it grabs the bundle and tests whether it is null. Every single time it logs as null, even though I am able to get the values passed and use them. Can anyone see what is wrong with the if statement?

        Intent i = getIntent();
        Bundle b = i.getExtras();
        int picked = b.getInt("PICK");
        int correct = b.getInt("CORR");
        type = b.getString("RAND");
        if (b == null || !b.containsKey("WELL")) {
            Log.v("BUNDLE", "bun is null");
        } else {
            Log.v("BUNDLE", "Got bun well");
        }

    EDIT: Here's where the bundle is created:

        Intent intent = new Intent(this, app.pack.son.class);
        Bundle b = new Bundle();
        b.putInt("PICK", pick);
        b.putInt("CORR", corr);
        b.putString("RAND", "yes");
        intent.putExtras(b);
        startActivity(intent);
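
    For reference, a minimal sketch of what the condition actually tests: the bundle itself is not null (the getInt calls above would otherwise throw), but containsKey("WELL") is false because no "WELL" extra is ever put into it, so the "bun is null" branch always runs. Checking a key that was actually added behaves as expected:

        // Test for an extra the sender really put into the bundle.
        Bundle b = getIntent().getExtras();
        if (b != null && b.containsKey("PICK")) {
            Log.v("BUNDLE", "Got bun well: " + b.getInt("PICK"));
        } else {
            Log.v("BUNDLE", "No extras, or PICK missing");
        }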

    Read the article

  • Database design for a media server containing movies, music, tv and everything in between?

    - by user364114
    In the near future I am attempting to design a media server as a personal project. My first consideration in getting the project underway is architecture. It will certainly be web based, but more specifically I am looking for suggestions on the database design. So far I am considering something like the following, where I am using [] to represent a table, the first text is the table name to give an idea of its purpose, and the items within {} would be fields of the table. Also note: fid is a functional id referencing some other table.

    [Item {id, value/name, description, link, type}] - this could be any entity: a single song or a whole music album, a game, a movie. I almost see this as a recursive relation, i.e. a song is an item, but the album that song is part of is also an item; or, for example, a TV season is an item with multiple items being TV episodes.
    [Type {id, fid, mime type, etc}] - file-type-specific information; could identify how code handles streaming/sending this item to a user.
    [Location {id, fid, path to file?}]
    [Users {id, username, email, password, ...?}] - user account information.
    [UAC {id, fid, access level}] - I almost feel it's more flexible to separate access control permissions from the user accounts themselves.
    [ItemLog {id, fid, fid2, timestamp}] - fid for user id and fid2 for item id; this way we know which user accessed what, and when.
    [UserLog {id, fid, timestamp}] - both are logs for access, whether login or last item access.
    [Quota {id, fid, cap}] - some sort of way to throttle users from queuing up the entire site and letting it download.

    Suggestions or comments are welcome, as the hope is that this project will be an open source project once some code is laid out.
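
    For reference, a hedged DDL sketch of the recursive Item table and its Type lookup described above (all names, types and the MySQL dialect are assumptions):

        CREATE TABLE item_type (
            id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            mime_type VARCHAR(100) NOT NULL        -- drives how the item is streamed to the client
        ) ENGINE=InnoDB;

        CREATE TABLE item (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            parent_id   INT UNSIGNED NULL,          -- NULL for a top-level item (album, TV season, movie)
            type_id     INT UNSIGNED NOT NULL,
            name        VARCHAR(255) NOT NULL,
            description TEXT,
            link        VARCHAR(1024),
            FOREIGN KEY (parent_id) REFERENCES item (id),
            FOREIGN KEY (type_id)   REFERENCES item_type (id)
        ) ENGINE=InnoDB;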

    Read the article

  • How to Use Calculated Color Values with ColorMatrix?

    - by Otaku
    I am changing the color values of each pixel in an image based on a calculation. The problem is that this takes over 5 seconds on my machine with a 1000x1333 image, and I'm looking for a way to make it much faster. I think ColorMatrix may be an option, but I'm having a difficult time figuring out how I would get a pixel's RGB values, use them in a calculation and then set the new pixel value. I can see how this could be done if I were just modifying (multiplying, subtracting, etc.) the original value with ColorMatrix, but not how I can use the pixel's returned value in a calculation that produces the new value. For example:

        Sub DarkenPicture()
            Dim clrTestFolderPath = "C:\Users\Me\Desktop\ColorTest\"
            Dim originalPicture = "original.jpg"
            Dim Luminance As Single
            Dim bitmapOriginal As Bitmap = Image.FromFile(clrTestFolderPath + originalPicture)
            Dim Clr As Color
            Dim newR As Byte
            Dim newG As Byte
            Dim newB As Byte
            For x = 0 To bitmapOriginal.Width - 1
                For y = 0 To bitmapOriginal.Height - 1
                    Clr = bitmapOriginal.GetPixel(x, y)
                    Luminance = ((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)) / 255
                    newR = Clr.R * Luminance
                    newG = Clr.G * Luminance
                    newB = Clr.B * Luminance
                    bitmapOriginal.SetPixel(x, y, Color.FromArgb(newR, newG, newB))
                Next
            Next
            bitmapOriginal.Save(clrTestFolderPath + "colorized.jpg", ImageFormat.Jpeg)
        End Sub

    The Luminance value is the calculated one. I know I can set ColorMatrix's M00, M11, M22 to 0, 0, 0 respectively and then put a new value in M40, M41, M42, but that new value is calculated from a multiplication and addition of that pixel's components ((0.21 * Clr.R) + (0.72 * Clr.G) + (0.07 * Clr.B)), and the result of that -- Luminance -- is then multiplied by the color component. Is this even possible with ColorMatrix?
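
    For reference, a hedged sketch: the exact per-pixel multiply above is quadratic in the channel values, so a single (linear) ColorMatrix cannot reproduce it. The closest linear equivalent is a straight luminance/grayscale matrix applied in one DrawImage call, which avoids GetPixel/SetPixel entirely (variable names assumed; requires System.Drawing and System.Drawing.Imaging):

        Dim cm As New ColorMatrix(New Single()() { _
            New Single() {0.21F, 0.21F, 0.21F, 0.0F, 0.0F}, _
            New Single() {0.72F, 0.72F, 0.72F, 0.0F, 0.0F}, _
            New Single() {0.07F, 0.07F, 0.07F, 0.0F, 0.0F}, _
            New Single() {0.0F, 0.0F, 0.0F, 1.0F, 0.0F}, _
            New Single() {0.0F, 0.0F, 0.0F, 0.0F, 1.0F}})
        Dim attrs As New ImageAttributes()
        attrs.SetColorMatrix(cm)
        Dim result As New Bitmap(bitmapOriginal.Width, bitmapOriginal.Height)
        Using g As Graphics = Graphics.FromImage(result)
            g.DrawImage(bitmapOriginal, _
                        New Rectangle(0, 0, result.Width, result.Height), _
                        0, 0, bitmapOriginal.Width, bitmapOriginal.Height, _
                        GraphicsUnit.Pixel, attrs)
        End Using
        ' For the original non-linear formula, LockBits over the raw pixel buffer is the
        ' usual fast path; per-pixel GetPixel/SetPixel is what makes the loop slow.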

    Read the article

  • Visual Studio 2008 project file does not load because of an unexpected encoding change.

    - by Xenan
    In our team we have a database project in Visual Studio 2008 which is under source control in Team Foundation Server. Every two weeks or so, after one co-worker checks in, the project file won't load on the other developers' machines. The error message is:

        The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

    When I look at the project file in Notepad++, the file looks like this:

        ??<NUL?NULxNULmNULlNUL NULvNULeNULrNULsNULiNULoNULnNUL ...

    and so on (you can see <?xml version in this), whereas a normal project file looks like:

        <?xml version="1.0" encoding="utf-16"?>
        ...

    So probably something is wrong with the encoding of the file. This is a problem for us because it turns out to be impossible to get the file encoding correct again. The 'solution' is to throw away the project file and get the last known working version from source control. According to the file, the encoding should be UTF-16. According to Notepad++, the corrupted file is actually UTF-8. My questions are:

    1. Why is Visual Studio messing up the encoding of the project file, apparently at random times and on random machines?
    2. What should we do to prevent this?
    3. When it has happened, is there a way to restore the current file in the correct encoding instead of pulling an older version from source control?

    As a last note: the problem is with one single project file; all the other project files don't exhibit this problem.

    Read the article

  • How to return a proper 404 for Google while providing user-friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser. Please excuse me if you feel this does not belong here. I am observing the behavior described here -- Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content -- to cite from an answer to the linked question: "[google is requesting random urls to] see if your site correctly handles non-existent files (by returning a 404 response header)". We have a custom page for non-existent content -- a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site's standing with Google -- they may not interpret the user-friendly page as a 404 Not Found and may think we are trying to fake something and serve duplicate content. How should I proceed to ensure that Google will not think the site is bogus, while still providing a user-friendly message to users in case they click on dead links by accident?
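
    For reference, a minimal sketch of the usual answer: keep the friendly page, but return it with a 404 status code instead of 200. On Apache this can be as small as an .htaccess directive (the page path is an assumption); an application front controller can equally emit the 404 header itself before rendering the same template:

        # .htaccess sketch: serve the styled "Content not found" page for missing URLs,
        # while the response status stays 404 because the target is a local path.
        ErrorDocument 404 /not-found.html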

    Read the article

  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the domain layer. However, I am now wondering which is the better approach to handling relationships between domain objects:

    1. Call the necessary find() method of the related data mapper directly within the domain object.
    2. Write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper function within the domain object.

    Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether their own, or by having access to the other mappers in the system). Additionally, it seems that option 2 unnecessarily complicates the data access layer, as it spreads table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects? Update: These are the only two solutions I can envision for handling relations between domain objects. Any example showing a better method would be welcome.

    Read the article

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want to have a custom copy command for copying entire sets of data that only makes sense for my program, instead of just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text. One solution I was thinking about is to only catch the hotkey when it "makes sense", but when you use some advanced custom selection it might be hard to differentiate whether your data is focused, text is selected, or both. Right now I am only using single keys as hotkeys, so just 'c' for the example above, and this seems to be what most other sites are doing too. The problem is that if you have text input this doesn't work so well. Is this the best solution? To clarify, I'm talking about advanced webapps that behave more like normal programs and not just websites presenting information (even though I think these guidelines would be valid for both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but if Ctrl+Tab, Alt+D or Ctrl+E didn't work I would be really pissed (cough Flash cough).
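
    For reference, a hedged sketch of the common compromise: handle single, unmodified letters only when no editable element has focus, and leave all modifier combinations (Ctrl+C, Ctrl+T, Alt+D, ...) to the browser (the handler name is hypothetical):

        // Register a bare-letter hotkey that steps aside while the user is typing.
        document.onkeydown = function (e) {
            e = e || window.event;
            var target = e.target || e.srcElement;
            var tag = target.tagName ? target.tagName.toLowerCase() : "";
            var typing = tag === "input" || tag === "textarea" || target.isContentEditable;
            if (typing || e.ctrlKey || e.altKey || e.metaKey) {
                return true;                          // let the browser handle it
            }
            if (String.fromCharCode(e.keyCode).toLowerCase() === "c") {
                copySelectedRows();                   // hypothetical app-level handler
                return false;                         // consume the key
            }
            return true;
        };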

    Read the article

  • Modify loggingConfiguration programmatically (Enterprise Library)

    - by alhambraeidos
    Hi all, I have an app.config in my Windows application with a loggingConfiguration section (Enterprise Library 4.1). I need to do the following programmatically:

    1. Get a list of all listeners in the loggingConfiguration section.
    2. Modify the fileName=".\Trazas\Excepciones.log" property of several RollingFlatFileTraceListeners.
    3. Modify several properties of the AuthenticatingEmailTraceListener listener.

    Any help, please? I haven't found any reference or samples. Thanks in advance. Greetings.

        <listeners>
          <add name="Excepciones RollingFile Listener" fileName=".\Trazas\Excepciones.log"
               formatter="Text Single Formatter" footer="&lt;/Excepcion&gt;" header="&lt;Excepcion&gt;"
               rollFileExistsBehavior="Overwrite" rollInterval="None" rollSizeKB="1500"
               timeStampPattern="yyyy-MM-dd"
               listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               traceOutputOptions="None" filter="All"
               type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          <add name="AuthEmailTraceListener"
               type="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListener, zzzz.Frk.Logging.AuthEmailTraceListener"
               listenerDataType="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListenerData, zzzz.Frk.Logging.AuthEmailTraceListener"
               formatter="Exception Formatter" traceOutputOptions="None"
               toAddress="[email protected]" fromAddress="[email protected]"
               subjectLineStarter=" Excepción detectada - " subjectLineEnder="incidencias"
               smtpServer="smtp.gmail.com" smtpPort="587" authenticate="true"
               username="[email protected]" password="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" enableSsl="true" />
        </listeners>
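
    For reference, a heavily hedged sketch of the general approach: open the configuration, cast the loggingConfiguration section to its settings class, walk the listener data collection, and save. The Enterprise Library 4.1 type and property names below are assumptions inferred from the listenerDataType attributes above and should be verified against the installed assemblies:

        // using System.Configuration;
        // using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
        var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        var logging = (LoggingSettings)config.GetSection(LoggingSettings.SectionName);

        // 1. list all configured listeners
        foreach (TraceListenerData listener in logging.TraceListeners)
            Console.WriteLine(listener.Name);

        // 2. repoint a rolling flat file listener (cast to its *TraceListenerData class)
        var rolling = logging.TraceListeners.Get("Excepciones RollingFile Listener")
                          as RollingFlatFileTraceListenerData;
        if (rolling != null)
            rolling.FileName = @".\Trazas\Otro\Excepciones.log";

        // 3. the custom AuthenticatingEmailTraceListenerData would be adjusted the same way.
        config.Save(ConfigurationSaveMode.Modified);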

    Read the article
