Search Results

Search found 970 results on 39 pages for 'noise reduction'.

Page 32/39 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Getting Started with Cloud Computing

    - by juanlarios
    You've likely heard that Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more information on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more information; an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.

    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else, not you. In other words, you do not buy the hardware or software to run your email or other services used in your organization – someone else does. Users simply connect to these services from their computers, and you pay a monthly subscription fee for each user taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.

    Private Cloud computing generally means that the hardware and software to run services used by your organization run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Private Cloud implementations today are generally found in larger organizations, but they are also viable for small and medium-sized businesses, since they allow automation of services and a reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate a Private Cloud is important to being successful.

    So – how do you get started? The first step is to determine what makes the most sense for your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where they make sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps, to get hands-on with these technologies.

    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.

    Cloud Computing Resources:
    - Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products.
    - Microsoft Virtual Academy – free online training to help improve your IT skill set.
    - Office 365 Trial/Info page – get more information or try it out for yourself.
    - Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
    - Windows Intune Trial/Info – get more information or try it out for yourself.
    - Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.

    Additional Resources You May Find Useful:
    - Springboard Series – your destination for technical resources, free tools, and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    - TechNet Evaluation Center – try some of the latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    - AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate in real time, or watch the on-demand recording.
    - Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs, and Tech·Days TV.

    Read the article

  • Image Erosion for face detection in C#

    - by Chris Dobinson
    Hi, I'm trying to implement face detection in C#. I currently have a black and white outline of a photo with a face within it (Here). However, I'm now trying to remove the noise and then dilate the image in order to improve reliability when I implement the detection. The method I have so far is:

        unsafe public Image Process(Image input)
        {
            Bitmap bmp = (Bitmap)input;
            Bitmap bmpSrc = (Bitmap)input;
            BitmapData bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
            int stride = bmData.Stride;
            int stride2 = bmData.Stride * 2;
            IntPtr Scan0 = bmData.Scan0;
            byte* p = (byte*)(void*)Scan0;
            int nOffset = stride - bmp.Width * 3;
            int nWidth = bmp.Width - 2;
            int nHeight = bmp.Height - 2;
            var w = bmp.Width;
            var h = bmp.Height;
            var rp = p;
            var empty = CompareEmptyColor;
            byte c, cm;
            int i = 0;

            // Erode every pixel
            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < w; x++, i++)
                {
                    // Middle pixel
                    cm = p[y * w + x];
                    if (cm == empty) { continue; }

                    // Row 0
                    if (x - 2 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y - 2 > 0) { c = p[(y - 2) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 1
                    if (x - 2 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y - 1 > 0) { c = p[(y - 1) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 2 (current row)
                    if (x - 2 > 0) { c = p[y * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0) { c = p[y * w + (x - 1)]; if (c == empty) { continue; } }
                    if (x + 1 < w) { c = p[y * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w) { c = p[y * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 3
                    if (x - 2 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y + 1 < h) { c = p[(y + 1) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 4
                    if (x - 2 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y + 2 < h) { c = p[(y + 2) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 2)]; if (c == empty) { continue; } }

                    // If all neighboring pixels are set, the current pixel
                    // is not a boundary pixel, so keep it.
                    rp[i] = cm;
                }
            }
            bmpSrc.UnlockBits(bmData);
            return bmpSrc;
        }

    As I understand it, in order to erode the image (and remove the noise), we need to check each pixel to see whether its surrounding pixels are black; if so, it is a border pixel and we need not keep it, which I believe my code does, so it is beyond me why it doesn't work. Any help or pointers would be greatly appreciated. Thanks, Chris
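
    A likely culprit (hedged, since the input image isn't shown): the bitmap is locked as PixelFormat.Format24bppRgb, so each pixel occupies three bytes and each row occupies Stride bytes, yet the code indexes the buffer as p[y * w + x] - one byte per pixel with no row padding. The pass also reads and writes the same buffer (rp == p, and bmpSrc is the same object as bmp), so earlier writes corrupt later neighborhood tests. A minimal stride-aware 3x3 erosion sketch, reusing the variables from the method above (stride, w, h, Scan0, empty) and assuming an effectively single-channel image (B == G == R):

        byte[] src = new byte[stride * h];
        System.Runtime.InteropServices.Marshal.Copy(Scan0, src, 0, src.Length);
        byte[] dst = new byte[stride * h];   // separate output so the pass never reads its own writes

        for (int y = 1; y < h - 1; y++)
        {
            for (int x = 1; x < w - 1; x++)
            {
                bool keep = true;
                for (int dy = -1; dy <= 1 && keep; dy++)
                    for (int dx = -1; dx <= 1 && keep; dx++)
                        if (src[(y + dy) * stride + (x + dx) * 3] == empty)
                            keep = false;   // any empty neighbour erodes this pixel

                byte v = keep ? src[y * stride + x * 3] : empty;
                dst[y * stride + x * 3] = v;       // blue
                dst[y * stride + x * 3 + 1] = v;   // green
                dst[y * stride + x * 3 + 2] = v;   // red
            }
        }
        System.Runtime.InteropServices.Marshal.Copy(dst, 0, Scan0, dst.Length);
        // Border rows/columns are left at 0 in this sketch.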

    Read the article

  • ProgrammingError when aggregating over an annotated & grouped Django ORM query

    - by ento
    I'm trying to construct a query to get the "average, maximum, minimum number of items purchased by a single user". The data source is this simple sales record table:

        class SalesRecord(models.Model):
            id = models.IntegerField(primary_key=True)
            user_id = models.IntegerField()
            product_code = models.CharField()
            price = models.IntegerField()
            created_at = models.DateTimeField()

    A new record is inserted into this table for every item purchased by a user. Here's my attempt at building the query:

        q = SalesRecord.objects.all()
        q = q.values('user_id').annotate(  # group by user and count the # of records
            count=Count('id'),             # (= # of items)
        ).order_by()
        result = q.aggregate(Max('count'), Min('count'), Avg('count'))

    When I try to execute the code, a ProgrammingError is raised at the last line:

        (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM (SELECT sales_records.user_id AS user_id, COUNT(sales_records.`' at line 1")

    Django's error screen shows that the SQL is:

        SELECT FROM (SELECT `sales_records`.`player_id` AS `player_id`, COUNT(`sales_records`.`id`) AS `count`
        FROM `sales_records`
        WHERE (`sales_records`.`created_at` >= %s AND `sales_records`.`created_at` <= %s )
        GROUP BY `sales_records`.`player_id` ORDER BY NULL) subquery

    It's not selecting anything! Can someone please show me the right way to do this?

    Hacking Django: I've found that clearing the cache of selected fields in django.db.models.sql.BaseQuery.get_aggregation() seems to solve the problem, though I'm not really sure whether this is a fix or a workaround:

        @@ -327,10 +327,13 @@
             # Remove any aggregates marked for reduction from the subquery
             # and move them to the outer AggregateQuery.
        +    self._aggregate_select_cache = None
        +    self.aggregate_select_mask = None
             for alias, aggregate in self.aggregate_select.items():
                 if aggregate.is_summary:
                     query.aggregate_select[alias] = aggregate
        -            del obj.aggregate_select[alias]
        +            if alias in obj.aggregate_select:
        +                del obj.aggregate_select[alias]

    ... which yields the result:

        {'count__max': 267, 'count__avg': 26.2563, 'count__min': 1}
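
    If patching Django is undesirable, a workaround (a sketch, not the ORM-level fix) is to run the grouped query as-is and reduce the per-user counts in Python; with one integer per user this is usually cheap enough:

        from django.db.models import Count  # as used in the query above

        rows = SalesRecord.objects.values('user_id').annotate(count=Count('id')).order_by()
        counts = [row['count'] for row in rows]
        result = {
            'count__max': max(counts),
            'count__min': min(counts),
            'count__avg': sum(counts) / float(len(counts)),  # float division, Python 2-era
        }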

    Read the article

  • Difference in performance between StAX and DOM parsing

    - by Fazal
    I have been using DOM for a long time, and DOM parsing performance has been pretty good. Even when dealing with XML of about 4-7 MB, the parsing has been fast. The issue we face with DOM is the memory footprint, which becomes huge as soon as we start dealing with large XMLs. Lately I tried moving to StAX (streaming parsers for XML), which is supposed to be a second-generation parser (reading about StAX, it is said to be the fastest parser now). When I tried the StAX parser on a large XML of about 4 MB, the memory footprint definitely dropped drastically, but the time taken to parse the entire XML and create Java objects out of it increased almost fivefold over DOM. I used the sjsxp.jar implementation of StAX. I can deduce to some extent that performance may not be extremely good due to the streaming nature of the parser, but a fivefold slowdown (e.g. DOM takes about 8 seconds to build objects for this XML, whereas StAX parsing took about 40 seconds on average) is definitely not going to be acceptable. Am I missing some point here completely, as I am not able to come to terms with these performance numbers?
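
    One common cause of numbers like these (an assumption, since the parsing code isn't shown) is paying a fixed setup cost per document - XMLInputFactory.newInstance() performs a service-provider lookup and is expensive - or building intermediate event objects instead of using the cursor API. A minimal cursor-style loop for timing comparison:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StaxTimer {
            // Reuse the factory: creating it per parse can dominate the runtime.
            private static final XMLInputFactory FACTORY = XMLInputFactory.newInstance();

            public static int countElements(String path) throws Exception {
                XMLStreamReader r = FACTORY.createXMLStreamReader(new FileInputStream(path));
                int elements = 0;
                while (r.hasNext()) {
                    if (r.next() == XMLStreamConstants.START_ELEMENT) {
                        elements++;   // touch each element without building a tree
                    }
                }
                r.close();
                return elements;
            }
        }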

    Read the article

  • Are there any radix/patricia/critbit trees for Python?

    - by Andrew Dalke
    I have about 10,000 words used as a set of inverted indices to about 500,000 documents. Both are normalized, so the index is a mapping of integers (word id) to a set of integers (ids of documents which contain the word). My prototype uses Python's set as the obvious data type. When I do a search for a document, I find the list of N search words and their corresponding N sets. I want to return the set of documents in the intersection of those N sets. Python's set intersection is implemented as a pairwise reduction. I think I can do better with a parallel search of sorted sets, as long as the library offers a fast way to get the next entry after i. I've been looking for something like that for some time. Years ago I wrote PyJudy, but I no longer maintain it and I know how much work it would take to get it to a stage where I'm comfortable with it again. I would rather use someone else's well-tested code, and I would like one which supports fast serialization/deserialization. I can't find any, or at least not any with Python bindings. There is avltree, which does what I want, but since even the pairwise set merge takes longer than I want, I suspect I want to have all my operations done in C/C++. Do you know of any radix/patricia/critbit tree libraries written as C/C++ extensions for Python? Failing that, what is the most appropriate library which I should wrap? The Judy Array site hasn't been updated in 6 years, with 1.0.5 released in May 2007. (Although it does build cleanly, so perhaps It Just Works.)
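
    For reference, the "next entry after i" search described above can be sketched in pure Python over sorted lists with bisect; a C extension would do the same walk without the interpreter overhead (illustrative only):

        from bisect import bisect_left

        def intersect_sorted(lists):
            """Galloping intersection of sorted, duplicate-free integer lists."""
            lists = sorted(lists, key=len)      # start from the smallest list
            result = lists[0]
            for other in lists[1:]:
                out = []
                pos = 0
                for doc_id in result:
                    pos = bisect_left(other, doc_id, pos)   # next entry >= doc_id
                    if pos == len(other):
                        break
                    if other[pos] == doc_id:
                        out.append(doc_id)
                result = out
            return result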

    Read the article

  • OpenMP - running things in parallel and some in sequence within them

    - by Sayan Ghosh
    Hi, I have a scenario like:

        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                for (k = 0; k < x; k++) {
                    val = 2*i + j + 4*k;
                    if (val != 0) {
                        for (t = 0; t < l; t++) {
                            someFunction((i + t) + someFunction(j + t) + k*t);
                        }
                    }
                }
            }
        }

    Considering this is block A, I have two more similar blocks in my code. I want to run them in parallel, so I used OpenMP pragmas. However, I am not able to parallelize it, because I am a tad confused about which variables would be shared and private in this case. If the function call in the inner loop were an operation like sum += x, then I could have added a reduction clause. In general, how would one approach parallelizing code using OpenMP when there is a nested for loop, and then another inner for loop doing the main operation? I tried declaring a parallel region and then simply putting pragma fors before the blocks, but I am definitely missing a point there! Thanks, Sayan
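
    A hedged starting point (assuming someFunction is thread-safe and the loop bounds are loop-invariant): parallelize only the outermost loop and mark every index written inside it, plus val, as private; the inner loops then run sequentially within each thread:

        // Sketch: i is implicitly private as the parallel loop index;
        // j, k, t and val are written by every thread, so they must be private.
        #pragma omp parallel for private(j, k, t, val) shared(n, m, x, l)
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                for (k = 0; k < x; k++) {
                    val = 2*i + j + 4*k;
                    if (val != 0) {
                        for (t = 0; t < l; t++) {
                            someFunction((i + t) + someFunction(j + t) + k*t);
                        }
                    }
                }
            }
        }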

    Read the article

  • Hierarchy of meaning

    - by asldkncvas
    I am looking for a method to build a hierarchy of words. Background: I am an "amateur" natural language processing enthusiast, and right now one of the problems that I am interested in is determining the hierarchy of word semantics from a group of words. For example, if I have a set which contains a "super" representation of the others, i.e. [cat, dog, monkey, animal, bird, ...], I am interested in any technique which would allow me to extract the word 'animal', which is the most meaningful and accurate representation of the other words inside this set. Note: they are NOT the same in meaning. cat != dog != monkey != animal, BUT cat is a subset of animal and dog is a subset of animal. I know by now a lot of you will be telling me to use WordNet. Well, I will try to, but I am actually interested in a very domain-specific area to which WordNet doesn't apply, because: 1) most words are not found in WordNet; 2) all the words are in another language; translation is possible but is of limited effect. Another example would be: [noise reduction, focal length, flash, functionality, ...], where 'functionality' includes everything in this set. I have also tried crawling Wikipedia pages and applying some techniques based on tf-idf etc., but Wikipedia pages don't really do much either. Can someone possibly enlighten me as to what direction my research should go towards? (I could use anything.)

    Read the article

  • OpenCL - incremental summation during compute

    - by user1721997
    I'm an absolute novice in OpenCL programming. For my app (molecular simulation) I wrote a kernel to calculate the intermolecular potential of a Lennard-Jones liquid. In this kernel I need to compute the cumulative value of the potential of all particles against one:

        __kernel void Molsim(__global const float* inmatrix, __global float* fi, const int c,
                             const float r1, const float r2, const float r3, const float rc,
                             const float epsilon, const float sigma,
                             const float h1, const float h23)
        {
            float fi0;
            float fi1;
            float d;
            unsigned int i = get_global_id(0); // number of particles (typically 2000)

            if (c != i) {
                // potential before particle movement
                d = sqrt(pow((0.5*h1-fabs(0.5*h1-fabs(inmatrix[c*3]-inmatrix[i*3]))),2.0)
                       + pow((0.5*h23-fabs(0.5*h23-fabs(inmatrix[c*3+1]-inmatrix[i*3+1]))),2.0)
                       + pow((0.5*h23-fabs(0.5*h23-fabs(inmatrix[c*3+2]-inmatrix[i*3+2]))),2.0));
                if (d < rc) {
                    fi0 = 4.0*epsilon*(pow(sigma/d,12.0)-pow(sigma/d,6.0));
                } else {
                    fi0 = 0;
                }

                // potential after particle movement
                d = sqrt(pow((0.5*h1-fabs(0.5*h1-fabs(r1-inmatrix[i*3]))),2.0)
                       + pow((0.5*h23-fabs(0.5*h23-fabs(r2-inmatrix[i*3+1]))),2.0)
                       + pow((0.5*h23-fabs(0.5*h23-fabs(r3-inmatrix[i*3+2]))),2.0));
                if (d < rc) {
                    fi1 = 4.0*epsilon*(pow(sigma/d,12.0)-pow(sigma/d,6.0));
                } else {
                    fi1 = 0;
                }

                // cumulative difference of potentials
                fi[0] += fi1 - fi0;
            }
        }

    My problem is in the line fi[0] += fi1 - fi0;. The one-element vector fi[0] contains wrong results. I read something about sum reduction, but I do not know how to do it during the calculation. Is there any simple solution to my problem?
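
    The line fi[0] += fi1 - fi0; is a data race: every work-item reads, adds, and writes the same global float with no synchronization, so most updates are lost. A minimal race-free sketch (illustrative, not the fastest approach - a work-group reduction would be the next step) gives each work-item its own slot and sums on the host:

        /* Kernel side (sketch): one slot per work-item instead of a shared fi[0].
         * fi must now be allocated with one float per particle:
         *     if (c != i) { ... fi[i] = fi1 - fi0; } else { fi[i] = 0.0f; }
         * Host side, after clEnqueueReadBuffer has copied fi back into fi_host: */
        float total = 0.0f;
        for (unsigned int j = 0; j < nParticles; j++) {   /* nParticles: illustrative name */
            total += fi_host[j];                          /* deterministic, race-free sum */
        }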

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The said protocol is pretty simple - some basic JSON-formatted messages which are transmitted in some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room. In the past I've mostly used C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects, and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Having been a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages, too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher-level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ is usually more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin=1;
            const uint64_t umax=10000000000LL;
            double sum=0.;
            #pragma omp parallel for reduction(+:sum)
            for(uint64_t u=umin; u<umax; u++)
                sum+=1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article

  • How to change radius of rounded rectangle in Photoshop

    - by MattDiPasquale
    At this point, I'm thinking of just using CSS 3, especially since I'm a programmer, but I'd like to do this with Photoshop because I think it's nicer since I'm working with images anyway, among other reasons. Before I move on, my first question is: is there a place like SuperUser for designers (or for Photoshop-like questions)? What I want: I want the icons on http://www.mattdipasquale.com/ to look like those on http://about.me/mattdipasquale. About.me has an outdated Twitter icon and does not have icons for GitHub, StackOverflow, etc. So, although I like the look of their icons, I want to be able to create these icons myself instead of using their versions. What I have: I have different iPhone icons, like the Facebook iPhone icon, Twitter iPhone icon, etc., that I got from iTunes, using Firebug to find the URL of the background image. I opened them up in Photoshop and pressed option + command + i to reduce the image size to 32px x 32px with Bicubic Sharper (best for reduction). I now have a square icon layer. Closing the gap: In addition to the icon layer, I want to have a clipping-mask layer that will apply the 5px rounded corners, 1px stroke, and 1px bevel. (Note: I just want to apply effects to the edges of the icon because the gloss and other effects are already encoded in the iTunes image. Also, I'm just guessing about the pixel values, but I want it to look good, like the icons on about.me.) What settings should I use for the blend options to make the icons look good, like iPhone icons or those used by about.me? Why a clipping mask? The reason I want to use a clipping mask is ease of reproducibility. I want to be able to apply the same styling to other square icon layers by simply replacing the square icon layer and then saving for web. If there is a better way to achieve such ease of reproducibility, please suggest it. I've seen Photoshop iPhone icon templates, but I couldn't figure out how to use them with my own images. Thanks! Matt
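
    For the CSS 3 route mentioned at the top, a rough approximation of that treatment looks like this (the values are guesses matching the 5px/1px/1px figures above, and browsers of that era still wanted -webkit-/-moz- prefixes):

        .icon {
            width: 32px;
            height: 32px;
            border-radius: 5px;                                  /* rounded corners */
            border: 1px solid rgba(0, 0, 0, 0.4);                /* 1px stroke */
            box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.5);  /* crude 1px bevel highlight */
        }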

    Read the article

  • How to extract $lastexitcode from C# PowerShell script execution.

    - by scope-creep
    Hi, I've got a script executing in C# using the PowerShell async execution code on Code Project here: http://www.codeproject.com/KB/threads/AsyncPowerShell.aspx?display=PrintAll&fid=407636&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2130851#xx2130851xx I need to return the $lastexitcode, and Jean-Paul describes how you can use a custom PSHost class to return it. I can't find any method or property in PSHost that returns the exit code. This engine I have needs to ensure that the script executes correctly. Any help would be appreciated. Regards, Bob. It's the $lastexitcode and the $? variables I need to bring back.

    Update - finally answered. I found out about the $host variable. It implements a callback into the host, specifically a custom PSHost object, enabling you to return the $lastexitcode. Here is a link to an explanation of $host: http://mshforfun.blogspot.com/2006/08/do-you-know-there-is-host-variable.html It seems to be obscure and badly documented, as usual with PowerShell docs. Using point 4, calling $host.SetShouldExit(1) returns 1 to the SetShouldExit method of PSHost, as described here: http://msdn.microsoft.com/en-us/library/system.management.automation.host.pshost.setshouldexit(VS.85).aspx It really depends on defining your own exit-code convention - 0 and 1, I guess. Regards, Bob.
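
    To make the callback concrete, a hedged sketch of a minimal custom host whose only job is to capture the exit code (the LastExitCode property name is illustrative; returning null from UI means the host offers no interactive UI):

        using System;
        using System.Globalization;
        using System.Management.Automation.Host;
        using System.Threading;

        public class ExitCodeHost : PSHost
        {
            public int LastExitCode { get; private set; }

            // Called when the script runs "exit <n>" or "$host.SetShouldExit(<n>)".
            public override void SetShouldExit(int exitCode) { LastExitCode = exitCode; }

            private readonly Guid instanceId = Guid.NewGuid();
            public override Guid InstanceId { get { return instanceId; } }
            public override string Name { get { return "ExitCodeHost"; } }
            public override Version Version { get { return new Version(1, 0); } }
            public override PSHostUserInterface UI { get { return null; } }
            public override CultureInfo CurrentCulture { get { return Thread.CurrentThread.CurrentCulture; } }
            public override CultureInfo CurrentUICulture { get { return Thread.CurrentThread.CurrentUICulture; } }
            public override void EnterNestedPrompt() { }
            public override void ExitNestedPrompt() { }
            public override void NotifyBeginApplication() { }
            public override void NotifyEndApplication() { }
        }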

    Read the article

  • JOGL Double Buffering

    - by Bar
    What is the proper way to implement double buffering in JOGL (Java OpenGL)? I am trying to do it with the following code:

        /** Creating the canvas. */
        GLCapabilities capabilities = new GLCapabilities();
        capabilities.setDoubleBuffered(true);
        GLCanvas canvas = new GLCanvas(capabilities);

        /** Function display(...), which draws a white rectangle on a black background. */
        public void display(GLAutoDrawable drawable) {
            drawable.swapBuffers();
            gl = drawable.getGL();
            gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            gl.glColor3f(1.0f, 1.0f, 1.0f);
            gl.glBegin(GL.GL_POLYGON);
            gl.glVertex2f(-0.5f, -0.5f);
            gl.glVertex2f(-0.5f, 0.5f);
            gl.glVertex2f(0.5f, 0.5f);
            gl.glVertex2f(0.5f, -0.5f);
            gl.glEnd();
        }

        /** Other functions are empty. */

    Questions: 1) When I'm resizing the window, I usually get flickering; as I see it, I have a mistake in my double-buffering implementation. 2) I have doubts about where the swapBuffers call must be placed - before or after (as many sources say) the drawing? As you noticed, I call swapBuffers (drawable.swapBuffers()) before drawing the rectangle; otherwise, I get noise after a resize. So what is the appropriate way to do this? Including or omitting the line capabilities.setDoubleBuffered(true) does not have any effect.
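
    For what it's worth, JOGL's GLCanvas swaps buffers automatically after each display() callback unless auto-swap is disabled, so an explicit swapBuffers() inside display() typically swaps twice per frame, which fits the flicker and noise described. A hedged sketch of the conventional shape:

        // Sketch: let GLCanvas manage the swap (auto-swap mode is on by default).
        public void display(GLAutoDrawable drawable) {
            GL gl = drawable.getGL();
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // set the clear color before clearing
            gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
            gl.glColor3f(1.0f, 1.0f, 1.0f);
            gl.glBegin(GL.GL_POLYGON);
            gl.glVertex2f(-0.5f, -0.5f);
            gl.glVertex2f(-0.5f, 0.5f);
            gl.glVertex2f(0.5f, 0.5f);
            gl.glVertex2f(0.5f, -0.5f);
            gl.glEnd();
            // No drawable.swapBuffers() here; the canvas swaps when display() returns.
        }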

    Read the article

  • Adding & Removing Dynamic Controls in C# WinForms.

    - by gsvirdi
    I have three tabs in my WinForm; depending on the selected RadioButton in TabPages[0], I add a few dynamic controls to the relevant TabPage. On the Button_Click event the controls are added, but the problem is that I'm not able to remove the dynamically added controls from the other (irrelevant) TabPage. Here's my code:

        Label label235 = new Label();
        TextBox tbMax = new TextBox();
        label235.Name = "label235";
        tbMax.Name = "txtBoxNoiseMax";
        label235.Text = "Noise";
        tbMax.ReadOnly = true;
        label235.ForeColor = System.Drawing.Color.Blue;
        tbMax.BackColor = System.Drawing.Color.White;
        label235.Size = new Size(74, 13);
        tbMax.Size = new Size(85, 20);
        if (radioButton1.Checked)
        {
            label235.Location = new Point(8, 476);
            tbMax.Location = new Point(138, 473);
            tabControl.TabPages[1].Controls.Add(label235);
            tabControl.TabPages[1].Controls.Add(tbMax);
            tabControl.TabPages[2].Controls.RemoveByKey("label235");
            tabControl.TabPages[2].Controls.RemoveByKey("tbMax");
        }
        else
        {
            label235.Location = new Point(8, 538);
            tbMax.Location = new Point(138, 535);
            tabControl.TabPages[1].Controls.RemoveByKey("label235");
            tabControl.TabPages[1].Controls.RemoveByKey("tbMax");
            tabControl.TabPages[2].Controls.Add(label235);
            tabControl.TabPages[2].Controls.Add(tbMax);
        }

    Where am I making a mistake?
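
    One likely mistake stands out (hedged, since other code may also be involved): RemoveByKey matches its argument against Control.Name, and the textbox's Name was set to "txtBoxNoiseMax", so removing by the C# variable name "tbMax" never finds anything:

        // RemoveByKey compares against Control.Name, not the variable name.
        // tbMax.Name == "txtBoxNoiseMax", so:
        tabControl.TabPages[2].Controls.RemoveByKey("tbMax");          // removes nothing
        tabControl.TabPages[2].Controls.RemoveByKey("txtBoxNoiseMax"); // removes the textbox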

    Read the article

  • How to avoid saving a blank model whose attributes can be blank

    - by auralbee
    Hello people, I have two models with a HABTM association, let's say book and author:

        class Book
          has_and_belongs_to_many :authors
        end

        class Author
          has_and_belongs_to_many :books
        end

    The author has a set of attributes (e.g. first name, last name, age) that can all be blank (see validation):

        validates_length_of :first_name, :maximum => 255, :allow_blank => true, :allow_nil => false

    In the books controller, I do the following to append all authors to a book in one step:

        @book = Book.new(params[:book])
        @book.authors.build(params[:book][:authors].values)

    My question: what would be the easiest way to avoid saving authors whose fields are all blank, to prevent too much "noise" in the database? At the moment, I do the following:

        validate :must_have_some_data

        def must_have_some_data
          empty = true
          hash = self.attributes
          hash.delete("created_at")
          hash.delete("updated_at")
          hash.each_value do |value|
            empty = false if value.present?
          end
          if (empty)
            errors.add_to_base("Fields do not contain any data.")
          end
        end

    Maybe there is a more elegant, Rails-like way to do that. Thanks.
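
    A tidier spelling of the same check, sketched against the Rails 2-era API used above (Hash#except and Object#blank? come from ActiveSupport):

        validate :must_have_some_data

        def must_have_some_data
          if attributes.except("created_at", "updated_at").values.all?(&:blank?)
            errors.add_to_base("Fields do not contain any data.")
          end
        end

    Filtering the blank author hashes out of params[:book][:authors] before calling build would avoid instantiating the empty records in the first place.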

    Read the article

  • Should all public methods of an API be documented?

    - by cynicalman
    When writing "library" type classes, is it better practice to always write markup documentation (i.e. javadoc) in java or assume that the code can be "self-documenting"? For example, given the following method stub: /** * Copies all readable bytes from the provided input stream to the provided output * stream. The output stream will be flushed, but neither stream will be closed. * * @param inStream an InputStream from which to read bytes. * @param outStream an OutputStream to which to copy the read bytes. * @throws IOException if there are any errors reading or writing. */ public void copyStream(InputStream inStream, OutputStream outStream) throws IOException { // copy the stream } The javadoc seems to be self-evident, and noise that just needs to be updated if the funcion is changed at all. But the sentence about flushing and not closing the stream could be valuable. So, when writing a library, is it best to: a) always document b) document anything that isn't obvious c) never document (code should speak for itself!) I usually use b), myself (since the code can be self-documenting otherwise)...

    Read the article

  • Finding text orientation in image (angle for rotation)

    - by maximus
    There is an image captured by a camera, and I need to find the angle of the text in order to rotate it and make the image better for OCR. I know that the Fourier transform can be used for this purpose. My question is: does it really give good results, or is it better to use something different? Can you tell me if there is a good method for this purpose? I am afraid that not every image containing text will give me a good result with the Fourier transform method. Actually, if I do as described here: link text (see the part related to an example of a text image) - calculating the logarithm of the magnitude of the Fourier transform of the image with text and then thresholding it - I get those points, I can calculate the line approximately passing through them, and after getting the line calculate the angle, and then apply an affine transform. But what if I do not get a good result using this method and apply a false transform? Any ideas, please, on how to judge whether the result is correct or not - or maybe another method is better? The binary image can contain noise; even if there is not much of it, the resulting angle may not be accurate.
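
    One cheap cross-check is the classic projection-profile method (sketched below; not tied to the Fourier approach, and it assumes a binarized NumPy image with text pixels set to 1): score candidate angles by the variance of the row sums, which peaks when the text lines are horizontal, and compare the winner against the Fourier estimate.

        import numpy as np
        from scipy.ndimage import rotate

        def projection_profile_angle(binary_img, angles=np.arange(-15.0, 15.5, 0.5)):
            """Return the candidate rotation angle with the sharpest horizontal profile."""
            def score(angle):
                rotated = rotate(binary_img, angle, reshape=False, order=0)
                return np.var(rotated.sum(axis=1))  # row-sum variance peaks on deskewed text
            return max(angles, key=score)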

    Read the article

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right - would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).) A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz and has an instruction cycle of 0.5 us (4 clock steps per instruction). So:
    - Taking Tacq = 6.06 us (acquisition time for the ADC, assuming chip temp. = 50°C) [datasheet p34]
    - Taking Fosc = 8 MHz (clock speed?)
    - Taking divisor = 4 (4 clock steps per CPU instruction)
    - This gives TAD = 0.5 us (TAD = 1/(Fosc/divisor))
    - Conversion time is 13*TAD [datasheet p31], which gives a conversion time of 6.5 us
    - ADC duration is then 12.56 us (= Tacq + 13*TAD?)
    - Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction), which would give a max sampling rate of 73.7 ksps (1/13.56)
    - Supposing 8 more instructions for real-time processing: another 4 us
    - Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us)
    - So the expected upper sampling rate is 56.9 ksps, and the Nyquist frequency for this sampling rate is therefore 28 kHz.
    If this is right, it suggests the (theoretical) performance of this chip's A/D suits signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC/dsPIC chips would be much appreciated! AKE
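
    As a quick arithmetic check of the chain above (the instruction-count figures are the question's own assumptions):

        Tacq = 6.06e-6                      # ADC acquisition time, s
        TAD = 1 / (8e6 / 4)                 # 0.5 us per A/D clock (Fosc/divisor)
        conversion = 13 * TAD               # 6.5 us
        adc_total = Tacq + conversion       # 12.56 us
        handling = 2 * 0.5e-6 + 8 * 0.5e-6  # load/store + processing instructions, 5 us
        rate = 1 / (adc_total + handling)   # ~56.9 ksps
        print(rate / 2)                     # ~28.5 kHz, i.e. the ~28 kHz Nyquist limit quoted above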

    Read the article

  • SQL Server 2005 stored procedure unexpected behaviour

    - by user283405
    I have written a simple stored procedure (run as a job) that checks users' subscribed keyword alerts. When an article is posted, the stored procedure sends an email to those users whose subscribed keyword matches the article title. One section of my stored procedure is:

        OPEN @getInputBuffer
        FETCH NEXT FROM @getInputBuffer INTO @String
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --PRINT @String
            INSERT INTO #Temp(ArticleID, UserID)
            SELECT A.ID, @UserID
            FROM CONTAINSTABLE(Question, (Text), @String) QQ
            JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
            WHERE A.ID > @ArticleID
            FETCH NEXT FROM @getInputBuffer INTO @String
        END
        CLOSE @getInputBuffer
        DEALLOCATE @getInputBuffer

    This job runs every 5 minutes and checks the last 50 articles. It was working fine for the last 3 months, but a week ago it behaved unexpectedly. The problem is that it sent irrelevant results. The @String contains the user's alert keyword, and it is matched against the latest articles using full-text search. The normal execution time is 3 minutes, but during the problem it grew to 3 days. The current status is that it is working fine, but we are unable to find any reason why it sent irrelevant results. Note: I am already removing noise words from the user alert keywords. I am using SQL Server 2005 Enterprise Edition.

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel CPU, I can see in the Windows Task Manager that all four cores are actually more or less active. One core is more active than the other three, but there is activity on those as well. There's no other program (besides the OS kernel, of course) running that would plausibly explain that activity. And when I close my program, all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even for a program that isn't multithreaded? Do you have any links documenting this technique? Some info on the program: it's a console app written with Qt; the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language-agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balances single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly, since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools for a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?

    Read the article

  • Looping HTML5 audio on the iPhone

    - by Peeps
    I'm trying to make an HTML5 web app that simply plays a sound over and over and over again, on my iPhone. I don't know any Obj-C, so I can't do it natively. What I have works fine, but the sound only plays once:

        <!DOCTYPE html>
        <html>
        <head>
            <title>noisemaker!</title>
            <meta http-equiv="content-type" content="text/html; charset=utf-8" />
            <meta name="viewport" content="maximum-scale=1, minimum-scale=1, width=device-width, user-scalable=no" />
            <meta name="apple-mobile-web-app-capable" content="yes" />
        </head>
        <body>
            <audio src="noise.mp3" autoplay controls loop></audio>
        </body>
        </html>

    Is there a way to either bypass the QuickTime audio screen and loop it in the webpage, or get the QuickTime audio screen to loop the sound?
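
    Mobile Safari of that era ignored the loop attribute, so a common workaround (sketched below; behavior varies by iOS version) is to restart playback from the audio element's ended event:

        <audio id="noise" src="noise.mp3" autoplay controls></audio>
        <script>
            // Restart the clip whenever it finishes, emulating loop.
            var audio = document.getElementById('noise');
            audio.addEventListener('ended', function () {
                this.currentTime = 0;
                this.play();
            }, false);
        </script>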

    Read the article

  • Application lifecycle and onCreate method in the the android sdk

    - by Leif Andersen
    I slapped together a simple test application that has a button and makes a noise when the user clicks it. Here are its methods:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            Button b = (Button)findViewById(R.id.easy);
            b.setOnClickListener(this);
        }

        public void onClick(View v) {
            MediaPlayer mp = MediaPlayer.create(this, R.raw.easy);
            mp.start();
            while(true) {
                if (!mp.isPlaying()) {
                    mp.release();
                    break;
                }
            }
        }

    My question is: why is onCreate acting like it's in a while loop? I can click on the button whenever, and it makes the sound. I might think it was just a property of listeners, but the Button object wasn't a member variable. I thought that Android would just go through onCreate once and proceed to the next lifecycle method. Also, I know that my current way of checking whether the sound is playing is crap... I'll get to that later. :) Thank you.
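
    On the first point, onCreate does run only once: the listener it registers stays attached to the Button for the activity's lifetime, so every click re-enters onClick, not onCreate. As for the busy-wait, MediaPlayer can call back when playback finishes - a hedged sketch:

        public void onClick(View v) {
            MediaPlayer mp = MediaPlayer.create(this, R.raw.easy);
            // Release the player when the clip ends instead of spinning on the UI thread.
            mp.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
                public void onCompletion(MediaPlayer player) {
                    player.release();
                }
            });
            mp.start();
        }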

    Read the article

  • Too many false positives when using FxCop.

    - by mark
    Dear ladies and sirs. We are using FxCop, and it generates too many false positives for our liking. For instance, if a private method is invoked using reflection, then this method is reported as potentially unused - understandable, and we suppress this warning explicitly using the SuppressMessage attribute. However, FxCop reports the same warning for the methods invoked from that method, whose caller we have already suppressed the warning about. This is stupid and generates too much noise. There are also false reports on member variables used in these methods. Also, there are problems with generic types (I even saw something about it on MS Connect). Anyway, I am wondering whether anyone knows if Microsoft is going to upgrade FxCop, because it seems to have been stuck at version 1.36 for a long time. BTW, we do not use StyleCop, because it is way too picky and we just do not have the time to examine all the zillion messages in order to suppress them all. Besides, the StyleCop report seems to augment, rather than replace, FxCop's. Maybe someone can suggest a good alternative to FxCop? We are using VS2008 Pro. Thanks.

    Read the article

  • Getting nice sound from Java

    - by Peter Lang
    I managed to play midi files using Java, but it produces some distracting noise. I figured out that this is caused by the poor quality soundbank file shipped with Java 6 SDK/JRE. How can I improve that quality? Here is what I have so far: MidiNote example using a Receiver works fine (sounds the same as when playing midi files with other players), so it does not seem to use the Soundbank shipped with Java but the fallback mechanism that uses a hardware MIDI port. Using SimpleMidiPlayer example to play a Midi file works, but the quality is poor. When I delete lib/audio/soundbank.gm, the quality is not bad any more, so the fallback is used again. When I put soundbank-deluxe.gm into the same directory, it is used and produces much better sound. Messing with the clients soundbank file as described in the official Installation Instructions certainly isn't an option, so I tried to put the new soundbank-file into the jar-file and load it: Soundbank soundbank = MidiSystem.getSoundbank( getClass().getResourceAsStream("soundbank-deluxe.gm")); if(synthesizer.isSoundbankSupported(soundbank)) { System.out.println(synthesizer.loadAllInstruments(soundbank)); } This prints true, but the sound remains unchanged. What am I doing wrong loading the soundbank file? Can I force the hardware MIDI port to be used instead of the standard soundbank file?

    Read the article

  • Overload Resolution and Optional Arguments in C# 4

    - by Dale McCoy
    I am working with some code that has seven overloads of a function TraceWrite:

        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, bool LogToFileOnly, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, string PieceID, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, LogWindowCommandENUM LogWindowCommand, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, bool UserMessage, int UserMessagePercent, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, string PieceID, LogWindowCommandENUM LogWindowCommand, string Data = "");
        void TraceWrite(string Application, LogLevelENUM LogLevel, string Message, LogWindowCommandENUM LogWindowCommand, bool UserMessage, int UserMessagePercent, string Data = "");

    (All public static; namespacing noise elided above and throughout.) So, with that background:

    1) Elsewhere, I call TraceWrite with four arguments: string, LogLevelENUM, string, bool, and I get the following errors:

        error CS1502: The best overloaded method match for 'TraceWrite(string, LogLevelENUM, string, string)' has some invalid arguments
        error CS1503: Argument '4': cannot convert from 'bool' to 'string'

    Why doesn't this call resolve to the second overload? (TraceWrite(string, LogLevelENUM, string, bool, string = ""))

    2) If I were to call TraceWrite with string, LogLevelENUM, string, string, which overload would be called - the first or the third? And why?

    Read the article
