Search Results

Search found 6028 results on 242 pages for 'total commander'.

Page 187/242 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • ActionScript: NetStream stutters after buffering.

    - by meandmycode
    Using NetStream to stream content over HTTP, I've noticed that, especially with certain exported H.264s, if the player encounters an empty buffer it stops and buffers to the requested length (as expected). However, once the buffer is full, playback doesn't resume normally; instead it jumps ahead, instantly playing the buffered duration in a brief moment and thereby emptying the buffer again. This then repeats over and over.

    Presumably, when the NetStream pauses to buffer, the playhead position keeps advancing, and the player tries to snap to that position on resume; since it can take 5 seconds to build a 2-second buffer, it ends up with a useless buffer again (this is an assumption).

    I've attempted to work around this by listening for an empty-buffer NetStatus event, pausing the stream, and setting up a loop that compares the current buffer length with the requested buffer length, resuming once the buffer length is greater than or equal to the requested buffer. However, this causes problems when there isn't enough of the video remaining: with a 10-second buffer and only 5 seconds of video left, the loop just sits there waiting for a buffer length of 10 seconds that can never arrive. You would think you could simply check which is smaller, the time left or the requested buffer length, but the times Flash reports are not accurate: adding the NetStream's current time index to the buffered time does not give the full duration of the movie (when at the end); it is close, but not the same.

    This brings me back to the original problem. If there is another way to fix this: Flash clearly knows when the buffer is ready, so how can I get Flash to pause when it buffers and resume once the buffer is ready? Currently it doesn't; it pauses and then, once the buffer is full, plays the entire buffered content in about 0.1 of a second. Thanks in advance, Stephen.
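    A minimal ActionScript 3 sketch of the pause-and-poll workaround described above. It assumes a NetStream named ns and a duration value taken from onMetaData; the 250 ms timer interval is illustrative, not from the question:

        import flash.events.NetStatusEvent;
        import flash.events.TimerEvent;
        import flash.utils.Timer;

        var pollTimer:Timer = new Timer(250);
        ns.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
        pollTimer.addEventListener(TimerEvent.TIMER, onPoll);

        function onStatus(e:NetStatusEvent):void {
            if (e.info.code == "NetStream.Buffer.Empty") {
                ns.pause();          // hold the playhead so it cannot run ahead of the buffer
                pollTimer.start();
            }
        }

        function onPoll(e:TimerEvent):void {
            // Resume when the buffer is full, or when the remaining video is shorter
            // than the requested buffer, whichever comes first.
            var remaining:Number = duration - ns.time;
            if (ns.bufferLength >= Math.min(ns.bufferTime, remaining)) {
                pollTimer.stop();
                ns.resume();
            }
        }

    Whether Math.min is reliable here depends on how far off the reported times are, as noted above; the sketch only shows the structure of the workaround.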

    Read the article

  • MDX equivalent to SQL subqueries with aggregation

    - by James Lampe
    I'm new to MDX and trying to solve the following problem. I've investigated calculated members, subselects, scope statements, etc., but can't quite get it to do what I want. Let's say I'm trying to come up with the MDX equivalent of the following SQL query:

        SELECT SUM(netMarketValue) net,
               SUM(CASE WHEN netMarketValue > 0 THEN netMarketValue ELSE 0 END) assets,
               SUM(CASE WHEN netMarketValue < 0 THEN netMarketValue ELSE 0 END) liabilities,
               SUM(ABS(netMarketValue)) gross,
               someEntity1
        FROM (SELECT SUM(marketValue) netMarketValue, someEntity1, someEntity2
              FROM <some set of tables>
              GROUP BY someEntity1, someEntity2) t
        GROUP BY someEntity1

    In other words, I have an account ledger where I hide internal offsetting transactions (within someEntity2), then calculate assets and liabilities after aggregating them by someEntity2. Then I want to see the grand total of those assets and liabilities aggregated by the bigger entity, someEntity1.

    In my MDX schema I'd presumably have a cube with dimensions for someEntity1 and someEntity2, and marketValue would be my fact table/measure. I suppose I could create another DSV that did what my subquery does (calculating net) and simply create a cube with that as my measure dimension, but I wonder if there is a better way. I'd rather not have two cubes (one for these net calculations and another going to a lower level of granularity for other use cases), since that would mean a lot of duplicate info in my database. These will be very large cubes.
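    For what it's worth, a heavily hedged MDX sketch of the signed-split idea; the cube, dimension, and measure names ([Ledger], [SomeEntity1], [SomeEntity2], [Market Value]) are invented, and whether this matches the real schema or performs acceptably on a large cube is untested:

        WITH
          MEMBER [Measures].[Assets] AS
            SUM([SomeEntity2].[SomeEntity2].Children,
                IIF([Measures].[Market Value] > 0, [Measures].[Market Value], NULL))
          MEMBER [Measures].[Liabilities] AS
            SUM([SomeEntity2].[SomeEntity2].Children,
                IIF([Measures].[Market Value] < 0, [Measures].[Market Value], NULL))
          MEMBER [Measures].[Gross] AS
            SUM([SomeEntity2].[SomeEntity2].Children, ABS([Measures].[Market Value]))
        SELECT { [Measures].[Market Value], [Measures].[Assets],
                 [Measures].[Liabilities], [Measures].[Gross] } ON COLUMNS,
               [SomeEntity1].[SomeEntity1].Children ON ROWS
        FROM [Ledger]

    The idea is simply that the inner GROUP BY someEntity2 becomes a SUM over the someEntity2 members evaluated in the context of each someEntity1 row.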

    Read the article

  • Building error in Eclipse with build.xml

    - by Zachary
    I am working on a Java project in Eclipse. The project requires a second project (not mine), named sams, on its build path. sams comes with a build.xml file and is supposed to generate some code with Apache CXF when built. When I run the cxf.generated target from that build file using Apache Ant inside Eclipse, I get the following output:

        Buildfile: C:\Docs\ZacRocha\Desktop\sams\build.xml
        cxf.generated:
        [echo] Generating code using Apache CXF wsdl2java...
        [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare
        [java] SEVERE: Could not resolve target object
        [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare
        [java] SEVERE: Could not resolve target object
        [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d
        [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivesoftware.wsdl} (The system cannot find the file specified)
        [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare
        [java] SEVERE: Could not resolve target object
        [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare
        [java] SEVERE: Could not resolve target object
        [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d
        [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivehardware.wsdl} (The system cannot find the file specified)

        BUILD SUCCESSFUL
        Total time: 4 seconds

    I am used to programming in Eclipse and know very little about building with Apache Ant. Can someone tell me where exactly the problem may be? Thanks in advance!
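    The literal ${archivesoftware.wsdl} and ${archivehardware.wsdl} in the failing paths suggest the Ant properties naming the WSDL files are never defined when the target runs from Eclipse. A guess at a fix, with placeholder paths (the real values belong to the sams project and may normally come from another properties file or the command line):

        <!-- hypothetical values; point these at the actual WSDL files shipped with sams -->
        <property name="archivesoftware.wsdl" value="wsdl/archivesoftware.wsdl"/>
        <property name="archivehardware.wsdl" value="wsdl/archivehardware.wsdl"/>

    The same values can also be supplied from the Eclipse Ant launch configuration (or a terminal) as -Darchivesoftware.wsdl=... and -Darchivehardware.wsdl=... arguments.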

    Read the article

  • shortest digest of a string

    - by meta
    [Description] Given a string of char type, find a shortest digest, defined as a shortest substring that contains all the distinct characters of the original string.

    [Example] A = "aaabedacd"; B = "bedac" is the answer.

    [My solution]
    1. Define an integer table with 256 elements, used to record how many times each character occurs in the current substring.
    2. Scan the whole string and count the number of distinct characters in it using the table above.
    3. Use two pointers, start and end, initially pointing to the start and (start + 1) of the given string. The current number of distinct characters is 1.
    4. Expand the substring [start, end) at the end until it contains all distinct characters. Update the shortest digest if possible.
    5. Contract the substring [start, end) at the start by one character at a time, restoring its digest property when necessary via step 4.

    The time cost is O(n) and the extra space cost is constant. Any better solution without extra space?
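    A minimal Python sketch of the two-pointer approach described above (the names are mine, not part of the original post):

        from collections import defaultdict

        def shortest_digest(s):
            need = len(set(s))              # number of distinct characters that must appear
            counts = defaultdict(int)       # occurrences of each character in the window
            have = 0
            best = s
            start = 0
            for end, ch in enumerate(s):    # expand the window at the end
                counts[ch] += 1
                if counts[ch] == 1:
                    have += 1
                while have == need:         # contract at the start while still a digest
                    if end - start + 1 < len(best):
                        best = s[start:end + 1]
                    counts[s[start]] -= 1
                    if counts[s[start]] == 0:
                        have -= 1
                    start += 1
            return best

        print(shortest_digest("aaabedacd"))   # -> "bedac"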

    Read the article

  • Linq-To-Entities group by

    - by Oskar Kjellin
    Hey, I'm building software for time reporting. I have a Dictionary<string, Dictionary<string, double>>. The key in the main dictionary is a user's name, and the value is a dictionary mapping a customer to hours (see the code below). I have a function GetDepartment(string UserName) that returns a string with the user's department. What I want is to create a new dictionary of the same type that has the department as the main key and, in the subdictionary, a customer-to-hours mapping where hours is the total for that department. I have been trying to do this with LINQ but did not succeed. Would be very glad for some help here!

    EDIT: This code does exactly what I want, but I want it in LINQ.

        Dictionary<string, Dictionary<string, double>> temphours = new Dictionary<string, Dictionary<string, double>>();
        foreach (var user in hours)
        {
            string department = GetDepartment(user.Key);
            if (!temphours.ContainsKey(department))
            {
                temphours.Add(department, new Dictionary<string, double>());
            }
            foreach (var customerReport in user.Value)
            {
                if (!temphours[department].ContainsKey(customerReport.Key))
                {
                    temphours[department].Add(customerReport.Key, 0);
                }
                temphours[department][customerReport.Key] += customerReport.Value;
            }
        }
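    For comparison, one possible LINQ form of the loop above (an untested sketch; it assumes hours is the user-to-customer-hours dictionary and that GetDepartment behaves as described):

        // requires using System.Linq;
        var temphours = hours
            .SelectMany(user => user.Value,
                        (user, report) => new { Department = GetDepartment(user.Key),
                                                Customer = report.Key,
                                                Hours = report.Value })
            .GroupBy(x => x.Department)
            .ToDictionary(
                g => g.Key,
                g => g.GroupBy(x => x.Customer)
                      .ToDictionary(c => c.Key, c => c.Sum(x => x.Hours)));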

    Read the article

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.

        You are in 'detached HEAD' state. You can look around, make experimental
        changes and commit them, and you can discard any commits you make in this
        state without impacting any branches by performing another checkout.

        If you want to create a new branch to retain commits you create, you may
        do so (now or later) by using -b with the checkout command again. Example:

          git checkout -b new_branch_name

        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)

    Read the article

  • Pixel shader weird compilation error

    - by ytrewq
    Hi, I'm experimenting with shaders a bit and I keep getting a weird compilation error that's driving me crazy! The following pixel shader code snippet:

        DirectionVector = normalize(f3LightPosition[i] - PixelPos);
        LightVec = PixelNormal - DirectionVector;
        // Get the light strength factor
        LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f));
        // TEST!!!
        LightStrFactor = 1.0f;
        // Add this light to the total light on this pixel
        LightVal += f4Light[i] * LightStrFactor;

    works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e. let LightStrFactor keep the value of the calculation above it, the shader fails to compile. LightStrFactor is a float; LightVal and f4Light[i] are float4; all the rest are float3.

    My question, besides why it doesn't compile, is: how come the DX compiler cares about the value of a float? Even if my values are incorrect, shouldn't that be a run-time matter? The shader compilation code is this:

        /* Compile the bitch */
        if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0", 0,
                                             &this->m_pCode, NULL, &this->m_constantTable)))
            GraphicException("Failed to compile pixel shader!");   // <-- gets here :(

        if (FAILED(g_D3dDevice->CreatePixelShader((DWORD*)this->m_pCode->GetBufferPointer(),
                                                  &this->m_hPixelShader)))
            GraphicException("Failed to create pixel shader!");

        this->m_fLoaded = true;

    Any help is appreciated, thanks!!! :]

    Read the article

  • IBasic Accumulator

    - by Tara
    I am trying to write an accumulator in IBasic for a college assignment. I have the general stuff down, but I cannot get it to accumulate. The code is below. My question is: how do I get it to accumulate and pass the value to the other module? I'm trying to count how many right answers the user gets. I also need to calculate the percentage of right answers, so if the user gets 9 out of 10 right, they answered 90% right.

        'October 15, 2009
        '
        'Lab 7.5 Programming Challenge 1 - Average Test Scores
        '
        'This is a dice game
        '
        declare main()
        declare inputName(name:string)
        declare getAnswer(num1:int, num2:int)
        declare getResult(num1:int, num2:int, answer:int)
        declare avgRight(getRight:int)
        declare printInfo(name:string, getRight:int, averege:float)

        openconsole
        main()
        do:until inkey$<>""
        closeconsole
        end

        sub main()
            def name:string
            def num1, num2, answer, total, getRight:int
            def averege:float
            inputName (name)
            getRight = 0
            For counter = 1 to 10
                getRight = getAnswer(num1, num2)
                getRight = getRight + 1
            next counter
            average = avgRight (getRight)
            printInfo(Name, getRight, average)
        end

        sub inputName (name)
            Input "Please enter your name: " ,name
        return

        sub getAnswer(num1, num2)
            def answer, getRight:int
            num1 = rnd (10) + 1
            num2 = rnd (10) + 1
            Print num1, "+ " ,num2
            Input "What is the answer to the equation? " ,answer
            getRight = getResult(num1, num2, answer)
        return getRight

        sub getResult(num1, num2, answer)
            def getRight:int
            if answer = num1 + num2
                getRight = 1
            else
                getRight = 0
            endif
        return getRight

        sub avgRight(getRight)
            def average:float
            average = getRight / 10
        return average

        sub printInfo(name, getRight, averege)
            Print "The students name is: " ,name
            Print "The number right is: " ,getRight
            Print Using ("&##.#&", "The average right is " ,averege * 100, "%")
        return

    Read the article

  • MVC 2.0 - JqGrid Sorting with Multiple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality. I have run into some issues with sorting columns that are related to the base table. Here is the script that loads the grid:

        public JsonResult GetData(GridSettings grid)
        {
            try
            {
                using (IWE dataContext = new IWE())
                {
                    var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable();
                    //sorting
                    query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);
                    //count
                    var count = query.Count();
                    //paging
                    var data = query.Skip((grid.PageIndex - 1) * grid.PageSize).Take(grid.PageSize).ToArray();
                    //converting to grid format
                    var result = new
                    {
                        total = (int)Math.Ceiling((double)count / grid.PageSize),
                        page = grid.PageIndex,
                        records = count,
                        rows = (from host in data
                                select new
                                {
                                    TYPE_ID = host.TYPE_ID,
                                    TYPE = host.TYPE,
                                    CR_ACTIVE = host.CR_ACTIVE,
                                    description = host.VWEPICORCATEGORY.description
                                }).ToArray()
                    };
                    return Json(result, JsonRequestBehavior.AllowGet);
                }
            }
            catch (Exception ex)
            {
                //send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");
            }
            //have to return something if there is an issue
            return Json("");
        }

    As you can see, the description field is part of the related table ("VWEPICORCATEGORY") while the OrderBy is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting that particular field, or maybe even a better way to implement this grid with multiple tables and its sorting functionality. Thanks in advance, Billy
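    One hedged way to let the related description column take part in sorting is to special-case it before falling back to the existing dynamic OrderBy (a sketch only; it assumes grid.SortOrder arrives as "asc"/"desc" as the jqGrid helper usually sends):

        if (grid.SortColumn == "description")
        {
            // sort on the navigation property of the related view
            query = grid.SortOrder == "desc"
                ? query.OrderByDescending(t => t.VWEPICORCATEGORY.description)
                : query.OrderBy(t => t.VWEPICORCATEGORY.description);
        }
        else
        {
            query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);
        }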

    Read the article

  • For "draggable" div tags that are NOT nested: JQuery/JavaScript div tag “containment” approach/algor

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers, and so forth.

    Question: For any particular div tag I need to quickly identify whether it contains other div tags (which may in turn contain other div tags).

    Since the div tags are draggable, they are NOT nested inside each other in the DOM but are, I think, absolutely positioned. So a "hit testing" approach seems to be the only way to determine containment, unless there is some "secret" built-in routine that could help with this. I've searched jQuery and I don't see any built-in routine for this.

    Does anyone know of an algorithm that's quicker than O(n^2)? It seems I have to walk the list of div tags in an outer loop (n) and have an inner loop (another n) that compares against all other div tags and does a "containment test" (position, width, height), building a list of contained div tags. That's n-squared. Then I have to build a list of all nested div tags by concatenating the contained lists. So the total would be O(n^2) + n. There must be a better way?
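    For reference, a small jQuery sketch of the per-pair "containment test" mentioned above, using page coordinates rather than DOM nesting (the function name is mine; this is the O(1) check that the question wants to avoid running n-squared times):

        // true if every edge of inner's bounding box lies inside outer's bounding box
        function boxContains(outer, inner) {
            var o = $(outer).offset(), i = $(inner).offset();
            return i.left >= o.left &&
                   i.top  >= o.top  &&
                   i.left + $(inner).outerWidth()  <= o.left + $(outer).outerWidth() &&
                   i.top  + $(inner).outerHeight() <= o.top  + $(outer).outerHeight();
        }

    One common way to beat the naive pairwise comparison is to sort the boxes by left edge (or keep them in an interval tree) so each box is only tested against candidates whose x ranges overlap; that is a suggestion, not something from the original post.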

    Read the article

  • WinForms databind to collection property (such as count)

    - by Ornus
    I want to databind to a collection property, such as Count. In general, when I data bind, I specify a data member for a property of the objects in the collection, not for the properties the collection itself exposes. For example, I have a list of custom objects that I show in a DataGridView, but I also want to show their total count using a separate label. Is there a way to do this through databinding? I imagine that somehow I need to force a PropertyManager to be used instead of the CurrencyManager? I get the following exception; notice that the DataSource is the collection and it has a TotalValue property.

        System.ArgumentException: Cannot bind to the property or column TotalValue on the DataSource.
        Parameter name: dataMember
           at System.Windows.Forms.BindToObject.CheckBinding()
           at System.Windows.Forms.Binding.SetListManager(BindingManagerBase bindingManagerBase)
           at System.Windows.Forms.ListManagerBindingsCollection.AddCore(Binding dataBinding)
           at System.Windows.Forms.BindingsCollection.Add(Binding binding)
           at System.Windows.Forms.BindingContext.UpdateBinding(BindingContext newBindingContext, Binding binding)
           at System.Windows.Forms.Binding.SetBindableComponent(IBindableComponent value)
           at System.Windows.Forms.ControlBindingsCollection.AddCore(Binding dataBinding)
           at System.Windows.Forms.BindingsCollection.Add(Binding binding)
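    Not data binding proper, but a small C# sketch of a common workaround: refresh the label from the list's ListChanged event. The control and type names (dataGridView1, countLabel, MyItem) are assumed, not from the question:

        // assumes: using System.ComponentModel; runs inside a Form that owns the two controls
        var items = new BindingList<MyItem>();
        dataGridView1.DataSource = items;

        items.ListChanged += (s, e) => countLabel.Text = items.Count.ToString();
        countLabel.Text = items.Count.ToString();   // initial value before any change fires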

    Read the article

  • The cost of passing by shared_ptr

    - by Artem
    I use std::tr1::shared_ptr extensively throughout my application, which includes passing objects in as function arguments. Consider the following:

        class Dataset {...};
        void f( shared_ptr< Dataset const > pds ) {...}
        void g( shared_ptr< Dataset const > pds ) {...}
        ...

    While passing a dataset object around via shared_ptr guarantees its existence inside f and g, the functions may be called millions of times, which causes a lot of shared_ptr objects to be created and destroyed. Here's a snippet of the flat gprof profile from a recent run:

        Each sample counts as 0.01 seconds.
          %   cumulative   self              self     total
         time   seconds   seconds      calls  s/call  s/call  name
         9.74     295.39    35.12 2451177304    0.00    0.00  std::tr1::__shared_count::__shared_count(std::tr1::__shared_count const&)
         8.03     324.34    28.95 2451252116    0.00    0.00  std::tr1::__shared_count::~__shared_count()

    So, ~17% of the runtime was spent on reference counting with shared_ptr objects. Is this normal? A large portion of my application is single-threaded, and I was thinking about rewriting some of the functions as

        void f( const Dataset& ds ) {...}

    and replacing the calls

        shared_ptr< Dataset > pds( new Dataset(...) );
        f( pds );

    with

        f( *pds );

    in places where I know for sure the object will not get destroyed while the flow of the program is inside f(). But before I run off to change a bunch of function signatures and calls, I wanted to know what the typical performance hit of passing by shared_ptr is. It seems like shared_ptr should not be used for functions that get called very often. Any input would be appreciated. Thanks for reading. -Artem

    Read the article

  • Using Google Visualization in GWT 2.0

    - by nick
    I'm working on learning GWT (total newb) and have a question regarding the Visualization API provided by Google. This page describes getting started with a pie chart (which is what I need): http://code.google.com/p/gwt-google-apis/wiki/VisualizationGettingStarted. However, I'm trying to do this in a composite UI using UiBinder, and I don't know how to handle the callback shown there correctly:

        public class SimpleViz implements EntryPoint {
          public void onModuleLoad() {
            // Create a callback to be called when the visualization API
            // has been loaded.
            Runnable onLoadCallback = new Runnable() {
              public void run() {
                Panel panel = RootPanel.get();
                // Create a pie chart visualization.
                PieChart pie = new PieChart(createTable(), createOptions());
                pie.addSelectHandler(createSelectHandler(pie));
                panel.add(pie);
              }
            };
            // Load the visualization api, passing the onLoadCallback to be called
            // when loading is done.
            VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE);
          }
        }

    My first assumption is that this would go in the UiBinder constructor, correct? Yet this assumes that I want to place the element in the RootLayoutPanel, and I don't. I can't see an elegant and obvious way of placing it in the binder. I submit that even this guess may be wrong. Any ideas from the experts?
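    A hedged sketch of one way to keep the callback inside a UiBinder composite instead of the entry point. The class, field, and ui.xml names are invented, and it reuses the createTable()/createOptions() helpers from the snippet above:

        import com.google.gwt.core.client.GWT;
        import com.google.gwt.uibinder.client.UiBinder;
        import com.google.gwt.uibinder.client.UiField;
        import com.google.gwt.user.client.ui.Composite;
        import com.google.gwt.user.client.ui.SimplePanel;
        import com.google.gwt.user.client.ui.Widget;
        import com.google.gwt.visualization.client.VisualizationUtils;
        import com.google.gwt.visualization.client.visualizations.PieChart;

        public class PieChartWidget extends Composite {
            interface Binder extends UiBinder<Widget, PieChartWidget> {}
            private static final Binder BINDER = GWT.create(Binder.class);

            @UiField SimplePanel chartHolder;   // an empty placeholder declared in PieChartWidget.ui.xml

            public PieChartWidget() {
                initWidget(BINDER.createAndBindUi(this));
                // Load the visualization API; when it is ready, put the chart into this
                // composite's own panel rather than into RootPanel.
                VisualizationUtils.loadVisualizationApi(new Runnable() {
                    public void run() {
                        PieChart pie = new PieChart(createTable(), createOptions());
                        chartHolder.setWidget(pie);
                    }
                }, PieChart.PACKAGE);
            }
        }

    The composite itself can then be added to whatever parent the rest of the UiBinder layout uses; the chart simply appears in its placeholder once the API has finished loading.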

    Read the article

  • android compile error: could not reserve enough space for object heap

    - by moonlightcheese
    I'm getting this error during compilation:

        Error occurred during initialization of VM
        Could not create the Java virtual machine.
        Could not reserve enough space for object heap

    What's worse, the error occurs intermittently. Sometimes it happens, sometimes it doesn't. It seems to depend on the amount of code in the application: if I get rid of some variables or drop some imported libraries, it will compile; when I add more to it, I get the error again. I've included the following sources in the application's [project_root]/src/ directory:

        org.apache.httpclient (I've stripped all references to log4j from the sources, so it isn't needed)
        org.apache.codec (as a dependency)
        org.apache.httpcore (dependency of httpclient)

    and my own activity code, consisting of nothing more than an instance of HttpClient. I know this has something to do with the amount of memory needed at compile time or with some compiler options, and I'm not really stressing my system while I'm coding. I've got 2GB of memory on this Core Duo laptop and Windows reports only 860MB page file usage (I haven't used any other memory tools). I should have plenty of memory and processing power for this, and I'm only compiling some common http libs: a total of 406 source files. What gives?

    Android API Level: 5
    Android SDK rel 5
    JDK version: 1.6.0_12

    Read the article

  • android compile error: could not reserve enough space for object heap

    - by moonlightcheese
    I'm getting this error during compilation:

        Error occurred during initialization of VM
        Could not create the Java virtual machine.
        Could not reserve enough space for object heap

    What's worse, the error occurs intermittently. Sometimes it happens, sometimes it doesn't. It seems to depend on the amount of code in the application: if I get rid of some variables or drop some imported libraries, it will compile; when I add more to it, I get the error again. I've included the following sources in the application's [project_root]/src/ directory:

        org.apache.httpclient (I've stripped all references to log4j from the sources, so it isn't needed)
        org.apache.codec (as a dependency)
        org.apache.httpcore (dependency of httpclient)

    and my own activity code, consisting of nothing more than an instance of HttpClient. I know this has something to do with the amount of memory needed at compile time or with some compiler options, and I'm not really stressing my system while I'm coding. I've got 2GB of memory on this Core Duo laptop and Windows reports only 860MB page file usage (I haven't used any other memory tools). I should have plenty of memory and processing power for this, and I'm only compiling some common http libs: a total of 406 source files. What gives?

    edit (4/30/2010-18:24): Just compiled some code where I got the above stated error. I closed some web browser windows and recompiled the same exact code, with no edits, and it compiled with no issue. This is definitely a compiler issue related to memory usage. Any help would be great, because I have no idea where to go from here.

    Android API Level: 5
    Android SDK rel 5
    JDK version: 1.6.0_12

    Sorry I had to repost this question, but regardless of whether I use the native HttpClient class in the Android SDK or my custom version downloaded from Apache, the error still occurs.
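    One thing worth checking (an assumption, not a confirmed diagnosis): this message usually means a JVM launched during the build cannot reserve the contiguous address space for its maximum heap, which on a 32-bit JDK can fail intermittently even when plenty of RAM is free; the observation above that closing browser windows made it build is consistent with that. A quick experiment is to cap the heap of every JVM spawned from the build environment, e.g. on Windows:

        rem picked up by any HotSpot JVM started from this environment (the value is illustrative)
        set _JAVA_OPTIONS=-Xmx256m

    If the error stops appearing with a smaller -Xmx, the problem is address-space fragmentation rather than a genuine lack of memory.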

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question; just looking for suggestions here. My application is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance.

    I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see it as a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code:

        use LWP::Simple;

        $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        $jsonresponse = get $uri;
        print $jsonresponse;

    On my local machine, running Ubuntu 10.4 and Perl 5.10.1:

        farhan@farhan-lnx:~$ perl --version
        This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi

    I can get the correct response and have it printed on the screen, e.g.:

        farhan@farhan-lnx:~$ head -10 output.txt
        {
          "total": 1000,
          "page": 1,
          "pagesize": 30,
          "questions": [
            {
              "tags": [
                "php",
                "arrays",
                "coding-style"
        (... snipped ...)

    But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine:

        [dredd]$ perl --version
        This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi
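    One possibility worth ruling out (an assumption: the service may be sending gzip-compressed bodies, which LWP::Simple::get hands back undecoded) is to switch to LWP::UserAgent and let decoded_content undo both Content-Encoding and charset; a minimal Perl sketch:

        use strict;
        use warnings;
        use LWP::UserAgent;

        my $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        my $ua  = LWP::UserAgent->new;

        # Ask for gzip explicitly; decoded_content will inflate it if that is what comes back.
        my $response = $ua->get($uri, 'Accept-Encoding' => 'gzip');
        die $response->status_line unless $response->is_success;

        print $response->decoded_content;

    If the output is readable on the older host with this version, the problem was the response encoding rather than the terminal.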

    Read the article

  • noSQL/SQL/RoR: Trying to build scalable ratings table for the game

    - by alexeypro
    I am trying to solve something complex (as it looks to me). I have the following entities:

    PLAYER (a few of them, with names like "John", "Peter", etc.). Each has a unique ID; for simplicity let's say it's their name.
    GAME (a few of them, say named "Hide and Seek", "Jump and Run", etc.). Same thing: each has a unique ID; for simplicity let it be its name for now.
    SCORE (it's numeric).

    So, how it works: each PLAYER can play in multiple GAMES and gets some SCORE in every GAME. I need to build rating tables, and not just one!

    Table #1: most played GAMES
    Table #2: best PLAYERS in all games (say, the total SCORE over every GAME)
    Table #3: best PLAYERS per GAME (by SCORE in that particular GAME)

    I could build something straightforward right away, but that will not work: I will have more than 10,000 players and 15 games, and the number of games will grow for sure. A score can be as low as 0 and as high as 1,000,000 (not sure if higher is possible at this moment) for a player in a game. So I really need some relative data. Any suggestions? I am planning to do it with SQL, but maybe just using it for key-value storage; anything, any ideas are welcome. Thank you!
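    A minimal relational sketch of the three rating queries (table and column names are invented; indexes or denormalized rollup tables would be the next step at 10,000+ players):

        -- scores(player_id, game_id, score): one row per player per game
        -- Table #1: most played games
        SELECT game_id, COUNT(*) AS plays
        FROM scores GROUP BY game_id ORDER BY plays DESC;

        -- Table #2: best players across all games (total score)
        SELECT player_id, SUM(score) AS total_score
        FROM scores GROUP BY player_id ORDER BY total_score DESC;

        -- Table #3: ranking within each game (take the top rows per game_id in the
        -- application, or with a window function where available)
        SELECT game_id, player_id, score
        FROM scores ORDER BY game_id, score DESC;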

    Read the article

  • C++ OpenCV CVCalibrateCamera2 is causing multiple errors

    - by tlayton
    I am making a simple calibration program in C++ using OpenCV. Everything goes fine until I actually try to call CVCalibrateCamera2. At this point, I get one of several errors.

    If the number of images I am using is equal to 4 (which is the number of points being drawn from each image):

        OpenCV Error: Sizes of input arguments do not match (Both matrices must have the same number of points) in unknown function, file ......\src\cv\cvfundam.cpp, line 870

    If the number of images is below 20:

        OpenCV Error: Bad argument (The total number of matrix elements is not divisible by the new number of rows) in unknown function, file ......\src\cxcore\cxarray.cpp, line 2749

    Otherwise, if the number of images is 20 or above:

        OpenCV Error: Unsupported format or combination of formats (Invalid matrix type) in unknown function, file ......\src\cxcore\cxarray.cpp, line 117

    I have checked the arguments for CVCalibrateCamera2 many times, and I am certain they are of the correct dimensions relative to one another. It seems like somewhere the program is trying to reshape a matrix based on the number of images, but I can't figure out where or why. Any ideas? I am using Eclipse Galileo, MinGW 5.1.6, and OpenCV 2.1.

    Read the article

  • Generating all unique combinations for "drive ya nuts" puzzle

    - by Yuval A
    A while back I wrote a simple Python program to brute-force the single solution for the Drive Ya Nuts puzzle. The puzzle consists of 7 hexagons with the numbers 1-6 on them, and all pieces must be aligned so that each number is adjacent to the same number on the next piece.

    The puzzle has ~1.4G non-unique possibilities: you have 7! options to sort the pieces by order (for example, center=0, top=1, continuing in clockwise order...). After you have sorted the pieces, you can rotate each piece in 6 ways (each piece is a hexagon), so you get 6**7 possible rotations for a given permutation of the 7 pieces. Totalling: 7!*(6**7) = ~1.4G possibilities. The following Python code generates these possible solutions:

        def rotations(p):
            for i in range(len(p)):
                yield p[i:] + p[:i]

        def permutations(l):
            if len(l) <= 1:
                yield l
            else:
                for perm in permutations(l[1:]):
                    for i in range(len(perm) + 1):
                        yield perm[:i] + l[0:1] + perm[i:]

        def constructs(l):
            for p in permutations(l):
                for c in product(*(rotations(x) for x in p)):
                    yield c

    However, note that the puzzle has only ~0.2G unique possible solutions, since you must divide the total number of possibilities by 6: each possible solution is equivalent to 5 other solutions (simply rotate the entire puzzle by 1/6 of a turn). Is there a better way to generate only the unique possibilities for this puzzle?
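    One hedged way to generate each solution class only once is to pin the orientation of whichever piece lands in the center (position 0): rotating the whole board turns the center piece in place, so fixing its rotation picks exactly one representative out of every 6. A sketch reusing the generators above (product is itertools.product):

        from itertools import product

        def unique_constructs(l):
            for p in permutations(l):
                # p[0] goes to the center in its given orientation; only the outer six rotate
                for c in product([p[0]], *(rotations(x) for x in p[1:])):
                    yield c

    That yields 7!*(6**6), about 235M candidates, matching the ~0.2G figure above.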

    Read the article

  • MySQL error in PHP

    - by fusion
    I'm trying to run this PHP code, which should display a quote from MySQL, but I can't figure out where it is going wrong. The result variable is null or empty. Can someone help me out? Thanks!

        <?php
        include 'config.php';

        // 'text' is the name of your table that contains
        // the information you want to pull from
        $rowcount = mysql_query("select count(*) as rows from quotes");

        // Gets the total number of items pulled from database.
        while ($row = mysql_fetch_assoc($rowcount)) {
            $max = $row["rows"];
            //print_r ($max);
        }

        // Selects an item's index at random
        $rand = rand(1,$max)-1;
        print_r ($rand);

        $result = mysql_query("select * from quotes limit $rand, 1") or die ('Error: '.mysql_error());

        if (!$result or mysql_num_rows($result)) {
            echo "Empty";
        } else {
            while ($row = mysql_fetch_array($result)) {
                $randomOutput = $row['cQuotes'];
                echo '<p>' . $randomOutput . '</p>';
            }
        }
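    For comparison, a shorter way to fetch one random row with the same old mysql_* API (a sketch, not necessarily a diagnosis of the code above; ORDER BY RAND() scans the whole table, which is usually fine for a small quotes table):

        <?php
        include 'config.php';

        $result = mysql_query("SELECT cQuotes FROM quotes ORDER BY RAND() LIMIT 1")
            or die('Error: ' . mysql_error());

        if (mysql_num_rows($result) == 0) {
            echo 'Empty';
        } else {
            $row = mysql_fetch_assoc($result);
            echo '<p>' . $row['cQuotes'] . '</p>';
        }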

    Read the article

  • YUI DataTable - How to have just one paginator?

    - by Rollo Tomazzi
    Hello, I'm using the YUI DataTable in a Grails 1.1 project with the Grails UI plugin 1.0.2 (YUI being 2.6.1). By default, the DataTable displays two paginators: one above and another below the table. Looking at the YUI API documentation, I can see that I can pass an array of YUI containers as a config parameter, but what are the names of these containers?

    I've tried looking at the HTML of the page using Firebug. The IDs of the divs containing the paginators are yui-dt0-paginator0 (above) and yui-dt0-paginator1 (below). If I use them to configure the containers for the navigator, then the navigator is just not displayed at all. Here's the relevant extract of the GSP page containing the DataTable element:

        <div class="body">
          <h1>This is the List of Control Accounts</h1>
          <g:if test="${flash.message}">
            <div class="message">${flash.message}</div>
          </g:if>
          <div class="yui-skin-sam">
            <gui:dataTable controller="controlAccount" action="enhancedListDataTableJSON"
              columnDefs="[
                [key:'id', label:'ID'],
                [key:'col1', label:'Col 1', sortable: true, resizeable: true],
                [key:'col2', label:'Col 2', sortable: true, resizeable: true]
              ]"
              sortedBy="col1"
              rowsPerPage="20"
              paginatorConfig="[
                template:'{PreviousPageLink} {PageLinks} {NextPageLink} {CurrentPageReport}',
                pageReportTemplate:'{totalRecords} total accounts',
                alwaysVisible:true,
                containers:'yui-dt0-paginator1'
              ]"
              rowExpansion="true" />
          </div>
        </div>

    Any help? Thanks! Rollo

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec on 8-core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load at around 2.5 hours after the tests start; after that we stay at this load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under the category ASP.NET Applications) suddenly spikes to values of 36-72 million! This keeps happening intermittently, i.e. we see the issue once in every 3 performance runs on the same application. In our testing environment we have 4 web servers and, interestingly enough, we have found that the issue occurs only on the 8-core web servers.

    Summarizing:

    Issue: The counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec on the 8-core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss, and error rates show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase; its graph almost overlaps with that of Requests/Sec up to the time the spike appears. When the spike appears, ISAPI Extension Requests/sec actually shows a drop.

    Test settings:
        Performance test run with Visual Studio Team System 2008.
        Soak test run for 6 hours.
        Maximum user load of 5000 users; this load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours).
        The issue is reproducible, though it happens intermittently (once in three or four runs).

    Test environment:
        Web site deployed on 4 web servers (Windows Server 2003); 2 of these are 4-core machines and the remaining 2 are 8-core.
        .NET Framework 3.5 SP1 installed on all 4 web servers.
        Application hosted on IIS 6.0, run in worker process isolation mode.

    Read the article

  • Making dtSearch highlight one hit per phrase, rather than one hit per word-in-a-phrase

    - by Chris
    I'm using dtSearch to highlight text-search matches within a document. The code to do this, minus some details and cleanup, is roughly along these lines:

        SearchJob sj = new SearchJob();
        sj.Request = "\"audit trail\""; // the user query
        sj.FoldersToSearch.Add(path_to_src_document);
        sj.Execute();

        FileConverter fileConverter = new FileConverter();
        fileConverter.SetInputItem(sj.Results, 0);
        fileConverter.BeforeHit = "<a name=\"HH_%%ThisHit%%\"/><b>";
        fileConverter.AfterHit = "</b>";
        fileConverter.Execute();
        string myHighlightedDoc = fileConverter.OutputString;

    If I give dtSearch a quoted phrase query like "audit trail", then dtSearch will do hit highlighting like this:

        An <a name="HH_0"/><b>audit</b> <a name="HH_1"/><b>trail</b> is a fun thing to have an <a name="HH_2"/><b>audit</b> <a name="HH_last"/><b>trail</b> about!

    Note that each word of the phrase is highlighted separately. Instead, I would like phrases to be highlighted as whole units, like this:

        An <a name="HH_0"/><b>audit trail</b> is a fun thing to have an <a name="HH_last"/><b>audit trail</b> about!

    This would A) make the highlighting look better, B) improve the behavior of my JavaScript that helps users navigate from hit to hit, and C) give more accurate counts of the total number of hits. Are there good ways to make dtSearch highlight phrases this way?

    Read the article

  • Poor Ruby on Rails performance when using nested :include

    - by Jeremiah Peschka
    I have three models that look something like this:

        class Bucket < ActiveRecord::Base
          has_many :entries
        end

        class Entry < ActiveRecord::Base
          belongs_to :submission
          belongs_to :bucket
        end

        class Submission < ActiveRecord::Base
          has_many :entries
          belongs_to :user
        end

        class User < ActiveRecord::Base
          has_many :submissions
        end

    When I retrieve a collection of entries doing something like:

        @entries = Entry.find(:all,
                              :conditions => ['entries.bucket_id = ?', @bucket],
                              :include => :submission)

    the performance is pretty quick, although I get a large number of extra queries because the view uses the Submission.user object. However, if I add the user to the :include statement, the performance becomes terrible and it takes over a minute to return a total of 50 entries and submissions spread across 5 users. When I run the associated SQL commands, they complete in well under a second.

        @entries = Entry.find(:all,
                              :conditions => ['entries.bucket_id = ?', @bucket],
                              :include => {:submission => :user})

    Why would this second command have such terrible performance compared to the first?

    Read the article
