Search Results

Search found 5469 results on 219 pages for 'compare and swap'.


  • Exception using SQLiteDataReader

    - by galford13x
    I'm making a custom SQLite wrapper. It is meant to allow a persistent connection to a database. However, I receive an exception when calling this function twice:

        public Boolean DatabaseConnected(string databasePath)
        {
            bool exists = false;
            if (ConnectionOpen())
            {
                this.Command.CommandText = string.Format(DATABASE_QUERY);
                using (reader = this.Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(), databasePath, true) == 0)
                        {
                            exists = true;
                            break;
                        }
                    }
                    reader.Close();
                }
            }
            return exists;
        }

    I use the above function to check whether the database is currently open before executing a command or trying to open a database. The first time I execute the function, it runs with no issue. After that, the reader = this.Command.ExecuteReader() call throws an exception: "Object reference not set to an instance of an object."

    StackTrace:

        at System.Data.SQLite.SQLiteStatement.Dispose()
        at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
        at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
        at System.Data.SQLite.SQLiteDataReader.NextResult()
        at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader()
        at EveTraderApi.Database.SQLDatabase.DatabaseConnected(String databasePath) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 579
        at EveTraderApi.Database.SQLDatabase.OpenSQLiteDB(String filename) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 119
        at EveTraderApiExample.Form1.CreateTableDataTypes() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 89
        at EveTraderApiExample.Form1.Button1_ExecuteCommand(Object sender, EventArgs e) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 35
        at System.Windows.Forms.Control.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
        at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.ButtonBase.WndProc(Message& m)
        at System.Windows.Forms.Button.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
        at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
        at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run(Form mainForm)
        at EveTraderApiExample.Program.Main() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Program.cs:line 18
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
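
    A hedged sketch of one way around this, assuming the wrapper exposes the underlying SQLiteConnection and that DATABASE_QUERY / FILE_NAME_COL_HEADER are the constants from the original code: the trace shows the command's previous statement being disposed from inside the new ExecuteReader(), which points at the shared this.Command and reader fields carrying half-torn-down state between calls. Creating and disposing a fresh command per call sidesteps that:

        public bool DatabaseConnected(string databasePath)
        {
            if (!ConnectionOpen())
                return false;

            // A new command per call: nothing half-disposed is carried over
            // from the previous invocation's reader.
            using (var cmd = this.Connection.CreateCommand())
            {
                cmd.CommandText = DATABASE_QUERY;
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(),
                                           databasePath, true) == 0)
                            return true;
                    }
                }
            }
            return false;
        }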


  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this:

    1. Render the scene to an FBO with shadow mapping from multiple locations
    2. Render the scene to the screen with shadow mapping
    3. ...black magic that I still have to implement...
    4. Combine the samples from step 1 with the image from step 2

    I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow-mapped pass is:

    1. render the scene to an FBO connected to a depth array texture from the POV of each light
    2. render the scene from the viewpoint and use vertex/frag shaders to compare the depths

    When I run my algorithm this way:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()

    the normal vectors in the screen pass appear to be incorrect (inverted, possibly). I'm pretty sure that's the issue, because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So it seems like the only thing that could be the culprit is the normals. However, if I do:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()  //wrong here
        render from point to screen
        glutSwapBuffers()

    the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? It's from a bugle trace grepped for 'buffer', with a few edits to make it a little clearer. Thanks!

        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 })
        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 })
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glDrawBuffer(GL_NONE)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        //start render to FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0)
        [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0)
        //bind to the FBO attached to a depth tex array for shadows
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //bind to the FBO I want the shadow mapped image rendered to
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //draw to screen pass
        //again shadow mapping FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        //bind to the screen
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //finished, swap buffers
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //INCORRECT OUTPUT
        //second try at render to screen:
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        //draw geometry
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        //draw geometry
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        //correct output
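
    One thing a grepped trace cannot show is whether each FBO is actually complete when it is drawn to, and an incomplete attachment combination is a classic source of passes that misbehave only on the first frame. A minimal sketch of the check worth adding after every attachment change (standard EXT_framebuffer_object API; the status codes are listed in the EXT spec):

        GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
            fprintf(stderr, "FBO incomplete: 0x%04x\n", status);
        }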


  • Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in

    - by Steve Johnson
    Hi all, hope that everybody here is OK. We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio. I am trying to automate the build for a web application built using C# and VB.Net.

    Our scenario is such that we have a central database to which our web application connects. Whenever we supply our clients with new functionality or a bug fix, we supply them with incremental builds. The SQL script is checked into source control for every incremental build once developers have made and tested their changes on our central DB server. I want to generate a differential script that can be run at the client as an incremental update script. Getting there, however, is a problem.

    Sometimes our developers tend to forget the database change-sets, and the script in source control is missing an SP or two. Also, sometimes we need to insert default data into some of the tables that must hold specific production values, not test values - like a table that contains services provided by the panel: we add a new service name, signature, credentials, service address, etc. in the ServiceTable. Besides this, many other tables may have test data that is not needed. If we use Data Compare, it will generate a changeset for the required data (important for the client to enable certain services) as well as for our test data that was added to the database as a result of our testing of the functionality or bug fix.

    Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008. Using SQLSchemaCompareTask, the generated script contains schema prefixes like [dbo]., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the backend of the application). Also, default data can't be generated by this process.

    To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically that does not have to be manually reviewed again before being sent to the client. Please suggest an effective methodology for such SQL script generation, and whether two different databases may be used, or something else? Is there any toolkit or API that can enable build automation for SQL Server databases? Thank you all. Regards, Steve


  • Page expired issue with back button and wicket SortableDataProvider and DataTable

    - by David
    Hi, I've got an issue with SortableDataProvider and DataTable in Wicket. I've defined my DataTable as such:

        IColumn<Column>[] columns = new IColumn[9];
        // column values are mapped to the private attributes listed in ColumnImpl.java
        columns[0] = new PropertyColumn(new Model("#"), "columnPosition", "columnPosition");
        columns[1] = new PropertyColumn(new Model("Description"), "description");
        columns[2] = new PropertyColumn(new Model("Type"), "dataType", "dataType");

    Adding it to the table:

        DataTable<Column> dataTable = new DataTable<Column>("columnsTable", columns, provider, maxRowsPerPage) {
            @Override
            protected Item<Column> newRowItem(String id, int index, IModel<Column> model) {
                return new OddEvenItem<Column>(id, index, model);
            }
        };

    My data provider:

        public class ColumnSortableDataProvider extends SortableDataProvider<Column> {

            private static final long serialVersionUID = 1L;
            private List list = null;

            public ColumnSortableDataProvider(Table table, String sortProperty) {
                this.list = Arrays.asList(table.getColumns().toArray(new Column[0]));
                setSort(sortProperty, true);
            }

            public ColumnSortableDataProvider(List list, String sortProperty) {
                this.list = list;
                setSort(sortProperty, true);
            }

            @Override
            public Iterator iterator(int first, int count) {
                /* first - first row of data
                   count - minimum number of elements to retrieve
                   So this method returns an iterator capable of iterating over {first, first+count} items */
                Iterator iterator = null;
                try {
                    if (getSort() != null) {
                        Collections.sort(list, new Comparator() {
                            private static final long serialVersionUID = 1L;
                            @Override
                            public int compare(Column c1, Column c2) {
                                int result = 1;
                                PropertyModel<Comparable> model1 = new PropertyModel<Comparable>(c1, getSort().getProperty());
                                PropertyModel<Comparable> model2 = new PropertyModel<Comparable>(c2, getSort().getProperty());
                                if (model1.getObject() == null && model2.getObject() == null)
                                    result = 0;
                                else if (model1.getObject() == null)
                                    result = 1;
                                else if (model2.getObject() == null)
                                    result = -1;
                                else
                                    result = ((Comparable) model1.getObject()).compareTo(model2.getObject());
                                result = getSort().isAscending() ? result : -result;
                                return result;
                            }
                        });
                    }
                    if (list.size() > (first + count))
                        iterator = list.subList(first, first + count).iterator();
                    else
                        iterator = list.iterator();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return iterator;
            }
        }

    The problem is the following:

    - I click a column header to sort by that column.
    - I navigate to a different page.
    - I click Back (or Forward if I do the opposite scenario).
    - Page has expired.

    It'd be nice to generate the page using PageParameters, but I somehow need to intercept the sort event to do so. Any pointers would be greatly appreciated. Thanks a ton!! David


  • PHP: Can pcntl_alarm() and socket_select() peacefully exist in the same thread?

    - by DWilliams
    I have a PHP CLI script, mostly written, that functions as a chat server for chat clients to connect to (don't ask me why I'm doing it in PHP, that's another story, haha). My script utilizes the socket_select() function to hang execution until something happens on a socket, at which point it wakes up, processes the event, and waits until the next event.

    Now, there are some routine tasks that I need performed every 30 seconds or so (check whether tempbanned users should be unbanned, save user databases, other assorted things). From what I can tell, PHP doesn't have very good multi-threading support at all. My first thought was to compare a timestamp every time the socket generates an event and gets the program flowing again, but this is very inconsistent, since the server could very well sit idle for hours and not have any of my cleanup routines executed.

    I came across the PHP pcntl extensions, which let me assign a time interval for SIGALRM to be sent and a function to be executed every time it's sent. This seems like the ideal solution to my problem; however, pcntl_alarm() and socket_select() clash with each other pretty badly. Every time SIGALRM is triggered, all sorts of crazy things happen to my socket control code. My program is fairly lengthy, so I can't post it all here, but it shouldn't matter, since I don't believe I'm doing anything wrong code-wise.

    My question is: Is there any way for a SIGALRM to be handled in the same thread as a waiting socket_select()? If so, how? If not, what are my alternatives here?

    Here's some output from my program. My alarm function simply outputs "Tick!" whenever it's called, to make it easy to tell when stuff is happening. This is the output (including errors) after allowing it to tick 4 times (there were no actual attempts at connecting to the server, despite what it says):

        [05-28-10 @ 20:01:05] Chat server started on 192.168.1.28 port 4050
        [05-28-10 @ 20:01:05] Loaded 2 users from file
        PHP Notice:  Undefined offset: 0 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
        PHP Warning:  socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
        [05-28-10 @ 20:01:15] Tick!
        PHP Warning:  socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
        [05-28-10 @ 20:01:25] Tick!
        PHP Warning:  socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
        [05-28-10 @ 20:01:25] Accepting socket connection from
        PHP Notice:  Undefined offset: 1 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
        PHP Warning:  socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
        [05-28-10 @ 20:01:35] Tick!
        PHP Warning:  socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
        [05-28-10 @ 20:01:45] Tick!
        PHP Warning:  socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
        [05-28-10 @ 20:01:45] Accepting socket connection from
        PHP Notice:  Undefined offset: 2 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
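
    A minimal sketch of one way to let the two coexist, assuming the periodic work is light enough to run from the handler: treat a socket_select() that fails with EINTR as a normal wake-up caused by SIGALRM and simply retry, instead of letting the false return value flow into socket_accept()/socket_getpeername() as the warnings above show happening ($allSockets is a hypothetical array holding the server and client sockets; SOCKET_EINTR is the sockets extension's constant for interrupted calls):

        declare(ticks = 1);   // make sure pending signals get dispatched

        function on_alarm($signo)
        {
            echo "Tick!\n";   // periodic maintenance goes here
            pcntl_alarm(30);  // re-arm for the next interval
        }
        pcntl_signal(SIGALRM, 'on_alarm');
        pcntl_alarm(30);

        while (true) {
            $read = $allSockets;
            $write = null;
            $except = null;
            if (@socket_select($read, $write, $except, null) === false) {
                if (socket_last_error() === SOCKET_EINTR) {
                    continue;             // interrupted by the alarm: just re-select
                }
                break;                    // a genuine socket error
            }
            // ... handle the sockets in $read as before ...
        }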


  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things:

    First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely.

    Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation).

    My best graph looks like this (image not included here); the R code I use to get it is the following:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                 int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                 int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco,
                       int_newit, int_newen, int_newsp, int_newhr, int_newlit,
                       int_newent, int_newrel, int_newhth, int_bapo, int_wopo,
                       int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a Weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.
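
    A minimal sketch of the percentage-axis part, assuming a long-format data frame like Interest.df above and a reasonably current ggplot2 (0.9+, where the companion scales package supplies percent formatting): position = "fill" stacks each bar to a constant height of 1, so the axis reads as proportions directly and no manual reshaping with plyr is needed.

        library(ggplot2)
        library(scales)  # percent_format(); ships alongside ggplot2 0.9+

        p <- ggplot(Interest.df, aes(x = Interest2, weight = Interest.weight, fill = Interest1)) +
          geom_bar(position = "fill") +                 # each bar sums to 1
          coord_flip() +
          scale_y_continuous(labels = percent_format()) +
          labs(x = NULL, y = NULL, fill = NULL)
        p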


  • Asp.net MVC VirtualPathProvider views parse error

    - by madcapnmckay
    Hi, I am working on a plugin system for Asp.net MVC 2. I have a dll containing controllers and views as embedded resources. I scan the plugin dlls for controllers using StructureMap, and I can then pull them out and instantiate them when requested. This works fine. I then have a VirtualPathProvider, which I adapted from this post:

        public class AssemblyResourceProvider : VirtualPathProvider
        {
            protected virtual string WidgetDirectory
            {
                get { return "~/bin"; }
            }

            private bool IsAppResourcePath(string virtualPath)
            {
                var checkPath = VirtualPathUtility.ToAppRelative(virtualPath);
                return checkPath.StartsWith(WidgetDirectory, StringComparison.InvariantCultureIgnoreCase);
            }

            public override bool FileExists(string virtualPath)
            {
                return (IsAppResourcePath(virtualPath) || base.FileExists(virtualPath));
            }

            public override VirtualFile GetFile(string virtualPath)
            {
                return IsAppResourcePath(virtualPath)
                    ? new AssemblyResourceVirtualFile(virtualPath)
                    : base.GetFile(virtualPath);
            }

            public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
            {
                return IsAppResourcePath(virtualPath)
                    ? null
                    : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
            }
        }

        internal class AssemblyResourceVirtualFile : VirtualFile
        {
            private readonly string path;

            public AssemblyResourceVirtualFile(string virtualPath) : base(virtualPath)
            {
                path = VirtualPathUtility.ToAppRelative(virtualPath);
            }

            public override Stream Open()
            {
                var parts = path.Split('/');
                var resourceName = Path.GetFileName(path);
                var apath = HttpContext.Current.Server.MapPath(Path.GetDirectoryName(path));
                var assembly = Assembly.LoadFile(apath);
                return assembly != null
                    ? assembly.GetManifestResourceStream(assembly.GetManifestResourceNames()
                        .SingleOrDefault(s => string.Compare(s, resourceName, true) == 0))
                    : null;
            }
        }

    The VPP seems to be working fine also. The view is found and is pulled out into a stream. I then receive a parse error: Could not load type 'System.Web.Mvc.ViewUserControl<dynamic>', which I can't find mentioned in any previous example of pluggable views. Why would my view not compile at this stage? Thanks for any help, Ian

    EDIT: Getting closer to an answer, but not quite clear why things aren't compiling. Based on the comments, I checked the versions and everything is in V2. I believe dynamic was brought in at V2, so this is fine. I don't even have V3 installed, so it can't be that. I have, however, got the view to render if I remove the <dynamic> altogether. So a VPP works, but only if the view is not strongly typed or dynamic. This makes sense for the strongly typed scenario, as the type is in the dynamically loaded dll, so the view engine will not be aware of it, even though the dll is in the bin. Is there a way to load types at app start? Considering having a go with MEF instead of my bespoke StructureMap solution. What do you think?


  • Does my fat-client application belong in the MVC pattern?

    - by boatingcow
    The web-based application I’m currently working on is growing arms and legs! It’s basically an administration system which helps users to keep track of bookings, user accounts, invoicing etc. It can also be accessed via a couple of different websites using a fairly crude API. The fat-client design loosely follows the MVC pattern (or perhaps MVP) with a PHP/MySQL backend, a Front Controller, several dissimilar Page Controllers, a liberal smattering of object-oriented and procedural Models, a confusing bunch of Views and templates, some JavaScript, CSS files and Flash objects.

    The programmer in me is a big fan of the principle of “Separation of Concerns” and, on that note, I’m currently trying to figure out the best way to separate and combine the various concerns as the project grows and more people contribute to it. The problem we’re facing is that although JavaScript (or Flash with ActionScript) is normally written with the template, hence part of the View and decoupled from the Controller and Model, we find that it actually encompasses the entire MVC pattern... Swap an image with an onmouseover event - that’s Behaviour. Render a datagrid - we’re manipulating the View. Send the result of reordering a list via AJAX - now we’re in Control. Check a form field to see if an email address is in a valid format - we’re consulting the Model.

    Is it wise to let the database people write up the validation Model with jQuery? Can the PHP programmers write the necessary Control structures in JavaScript? Can the web designers really write a functional AJAX form for their View? Should there be a JavaScript overlord for every project?

    If the MVC pattern could be applied to the people instead of the code, we would end up with this:

        Model - the database boffins - “SELECT * FROM mind WHERE interested IS NULL”
        Control - pesky programmers - “class Something extends NothingAbstractClass{…}”
        View - traditionally the domain of the graphic/web designer - “”

    …and a new layer:

        Behaviour - interaction and feedback designer - “CSS3 is the new black…”

    So, we’re refactoring and I’d like to stick to best-practice design, but I’m not sure how to proceed. I don’t want to reinvent the wheel, so would anyone have any hints or tips as to what pattern I should be looking at, or any code samples from someone who’s already done the dirty work? As the programmer guy, how can I rewrite the app for backend and front end whilst keeping the two separate? And before you ask, yes I’ve looked at Zend, CodeIgniter, Symfony, etc., and no, they don’t seem to cross the boundary between server logic and client logic!


  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref] [int] NOT NULL,
            [article_ref] [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref] [int] NOT NULL,
            [size] [bigint] NOT NULL
        )

    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing:

    1. Indexes are disabled on the Articles table.
    2. For each dumped text file, data is BULK copied to a temporary table.
    3. The temporary table is updated.
    4. Old data for the server is dropped from the Articles table.
    5. The temporary table data is copied to the Articles table.
    6. The temporary table is dropped.

    Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. That is,

        SELECT * FROM Articles WHERE server_ref = 33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous.

    Partitioning is a feature of SQL Server Enterprise, but $$$; it is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount?

    The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thought process behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts?

    This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake up.

    UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
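
    On the LIKE '%criteria%' point specifically, a hedged sketch worth benchmarking during the migration: PostgreSQL's pg_trgm contrib module can index substring searches, which a plain B-tree index cannot help with. This assumes PostgreSQL 9.1 or later, where trigram indexes accelerate LIKE directly; on earlier versions pg_trgm only speeds up its own similarity operators. The table and column names follow the schema above:

        -- once per database (9.1+ syntax):
        CREATE EXTENSION pg_trgm;

        CREATE INDEX articles_title_trgm_idx
            ON articles USING gin (article_title gin_trgm_ops);

        -- can now use the index rather than scanning ~500M rows:
        SELECT * FROM articles
        WHERE server_ref = 33 AND article_title LIKE '%criteria%';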


  • Moving from Linear Probing to Quadratic Probing (hash collisions)

    - by Nazgulled
    Hi, my current implementation of a hash table uses linear probing, and now I want to move to quadratic probing (and later to chaining and maybe double hashing too). I've read a few articles, tutorials, Wikipedia, etc., but I still don't know exactly what I should do.

    Linear probing, basically, has a step of 1 and that's easy to do. When searching, inserting or removing an element from the hash table, I need to calculate a hash, and for that I do this:

        index = hash_function(key) % table_size;

    Then, while searching, inserting or removing, I loop through the table until I find a free bucket, like this:

        do {
            if (/* CHECK IF IT'S THE ELEMENT WE WANT */) {
                // FOUND ELEMENT
                return;
            } else {
                index = (index + 1) % table_size;
            }
        } while (/* LOOP UNTIL IT'S NECESSARY */);

    As for quadratic probing, I think what I need to do is change how the "index" step size is calculated, but that's the part I don't understand. I've seen various pieces of code, and all of them are somewhat different. Also, I've seen some implementations of quadratic probing where the hash function is changed to accommodate that (but not all of them). Is that change really needed, or can I avoid modifying the hash function and still use quadratic probing?

    EDIT: After reading everything pointed out by Eli Bendersky below, I think I got the general idea. Here's part of the code at http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx:

        for ( step = 1; table->table[h] != EMPTY; step++ ) {
            if ( compare ( key, table->table[h] ) == 0 )
                return 1;

            /* Move forward by quadratically, wrap if necessary */
            h = ( h + ( step * step - step ) / 2 ) % table->size;
        }

    There are 2 things I don't get... They say that quadratic probing is usually done using c(i) = i^2. However, in the code above, it's doing something more like c(i) = (i^2 - i)/2. I was ready to implement this in my code, but I would simply do:

        index = (index + (index^index)) % table_size;

    ...and not:

        index = (index + (index^index - index)/2) % table_size;

    If anything, I would do:

        index = (index + (index^index)/2) % table_size;

    ...because I've seen other code examples dividing by two. Although I don't understand why...

    1) Why is it subtracting the step?
    2) Why is it dividing by 2?
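
    For what it's worth, a sketch of what the quoted loop computes, in plain C. The subtraction and the division by 2 fall out of using triangular numbers c(i) = (i^2 - i)/2 = 0, 1, 3, 6, 10, ... instead of i^2: with a power-of-two table size that sequence visits every bucket, which plain i^2 does not. The consecutive differences of the sequence are just 1, 2, 3, ..., so it can be computed with one addition per probe. Also note that ^ is bitwise XOR in C, not exponentiation, so index^index would always be 0. The bucket checks are elided as comments:

        /* step grows 1, 2, 3, ..., so the cumulative offset after k probes is
           1+2+...+k = k(k+1)/2 - the same triangular family as
           (step*step - step)/2 computed directly. */
        size_t h = hash % table_size;          /* table_size: a power of two */
        for (size_t step = 1; step <= table_size; step++) {
            /* ... examine table[h]: return on match, stop on empty ... */
            h = (h + step) % table_size;
        }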


  • How to pass the 'argument-line' of one PowerShell function to another?

    - by jwfearn
    I'm trying to write some PowerShell functions that do some stuff and then transparently call through to an existing built-in function. I want to pass along all the arguments untouched. I don't want to have to know any details of the arguments. I tried using 'splat' to do this with @args, but that didn't work as I expected.

    In the example below, I've written a toy function called myls, which is supposed to print hello! and then call the same built-in function, Get-ChildItem, that the built-in alias ls calls, with the rest of the argument line intact. What I have so far works pretty well:

        function myls
        {
            Write-Output "hello!"
            Invoke-Expression("Get-ChildItem " + $MyInvocation.UnboundArguments -join " ")
        }

    A correct version of myls should be able to handle being called with no arguments, with one argument, with named arguments, from a line containing multiple semicolon-delimited commands, and with variables in the arguments, including string variables containing spaces. The tests below compare myls and the built-in ls:

    [NOTE: output elided and/or compacted to save space]

        PS> md C:\p\d\x, C:\p\d\y, C:\p\d\"jay z"
        PS> cd C:\p\d
        PS> ls                           # no args
        PS> myls                         # pass
        PS> cd ..
        PS> ls d                         # one arg
        PS> myls d                       # pass
        PS> $a="A"; $z="Z"; $y="y"; $jz="jay z"
        PS> $a; ls d; $z                 # multiple statements
        PS> $a; myls d; $z               # pass
        PS> $a; ls d -Exclude x; $z      # named args
        PS> $a; myls d -Exclude x; $z    # pass
        PS> $a; ls d -Exclude $y; $z     # variables in arg-line
        PS> $a; myls d -Exclude $y; $z   # pass
        PS> $a; ls d -Exclude $jz; $z    # variables containing spaces in arg-line
        PS> $a; myls d -Exclude $jz; $z  # FAIL!

    Is there a way I can re-write myls to get the behavior I want?
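
    For comparison, a minimal sketch of the splatting variant: in a simple function with no param() block, the automatic $args array keeps enough binding information that @args re-splats both positional and named arguments (this is the behaviour documented in about_Splatting; a declared param() block or [CmdletBinding()] swallows the arguments first and is a common reason @args appears not to work). It would be worth re-running the test matrix above against it, especially the space-containing-variable case:

        function myls
        {
            Write-Output "hello!"
            # re-splat everything, including named args like -Exclude
            Get-ChildItem @args
        }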


  • Why does touching "d_name" make calls to readdir() fail?

    - by Sarah Mani
    Hi, I'm trying to write a little helper for Windows which eventually will accept a file extension as an argument and return the number of files of that kind in the current directory. To do so, I'm reading the file entries in the directories, and after getting the extension I'd like to convert it to lowercase to compare it with the yet-to-be-added argument. When converting the extension to lowercase, I found that touching even a duplicate string of the d_name variable causes strange behaviour - it looks as if no more calls to readdir() are made. Here is the code I'm using right now (the commented code is preliminary) and the output for a given directory:

        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        char * strrch(char *string, size_t elements, char character)
        {
            char *reverse = string + elements;
            while (--reverse != string)
                if (*reverse == character)
                    return reverse;
            return NULL;
        }

        void test(char *string)
        {
            // Even being a duplicate will make it fail:
            char *str = strdup(string);
            printf("Strings: %s %s\n", string, str);
            *str = 'a';
            printf("Strings: %s %s\n", string, str);
            //unsigned short int i = 0;
            //for (; str[i] != '\0', str++; i++)
            //    str[i] = tolower((unsigned char) str[i]);
            //puts(str);
        }

        int main(int argc, char **argv)
        {
            DIR *directory;
            struct dirent *element;
            if (directory = opendir("."))
            {
                while (element = readdir(directory))
                    test(strrch(element->d_name, element->d_namlen, '.'));
                closedir(directory);
                puts(NULL);
            }
            else
                puts("Couldn't open the directory.\n");
        }

    Output without modifying the duplicate (modification and the second printf call commented):

        Strings: (null) (null)
        Strings: . .
        Strings: .exe .exe
        Strings: .pdf .pdf
        Strings: .c .c
        Strings: .ini .ini
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .pdf .pdf
        Strings: .flac .flac
        Strings: .FLAC .FLAC
        Strings: .lnk .lnk
        Strings: .URL .URL

    Output of the same directory (with the code above, with the 2 printfs):

        Strings: (null) (null)

    Is there anything wrong? Is it a compiler issue? I'm using GCC 4.4.3 on Windows (MinGW) right now. Thank you very much for your help. By the way, is there any other way to work with files and directories in a Windows environment without using the POSIX functions?
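
    The "(null)" in the very first output line is the giveaway: the first directory entry is "." (no extension past the first character), so strrch() returns NULL, and both strdup(NULL) and the later *str = 'a' are undefined behaviour, which explains why the loop appears to die after one entry. A minimal sketch of the guarded version, which also lowercases the duplicate only and never touches d_name:

        #include <ctype.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static void test(const char *ext)
        {
            if (ext == NULL)            /* entry has no '.', e.g. "." and ".." */
                return;
            char *copy = strdup(ext);   /* work on a copy; leave d_name alone */
            if (copy == NULL)
                return;
            for (char *p = copy; *p != '\0'; p++)
                *p = (char)tolower((unsigned char)*p);
            printf("extension: %s\n", copy);
            free(copy);
        }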


  • Can a View Controller manage more than 1 nib based view?

    - by Hugo Brynjar
    I have a VC controlling a screen of content that has 2 modes: a normal mode and an edit mode. Can I create a single VC with 2 views, each from separate nibs?

    In many situations on the iPhone, you have a VC which controls an associated view. Then, on a button press or other event, a new VC is loaded and its view becomes the top-level view, etc. But in this situation, I have 2 modes that I want to use the same VC for, because they are closely related. So I want a VC which can swap in/out 2 views. As per this answer: http://stackoverflow.com/questions/863321/iphone-how-to-load-a-view-using-a-nib-file-created-with-interface-builder/2683153#2683153 I have found that I can load a VC with an associated view from a nib and then later on load a different view from another nib and make that new view the active view:

        NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"EditMode" owner:self options:nil];
        UIView *theEditView = [nibObjects objectAtIndex:0];
        self.editView = theEditView;
        [self.view addSubview:theEditView];

    The secondary nib has outlets wired up to the VC like the primary nib. When the new nib is loaded, the outlets are all connected up fine and everything works nicely. Unfortunately, when this edit view is then removed, there doesn't seem to be any elegant way of getting the outlets hooked up again to the (normal mode) view from the original nib. Nib loading and outlet setting seems to be a once-only thing.

    So, if you want to have a VC that swaps in/out 2 views without creating a new VC, what are the options?

    1) You can do everything in code, but I want to use nibs because it makes creating the UI simpler.
    2) You have 1 nib for your VC and just hide/show elements using the hidden property of UIView and its subclasses.
    3) You load a new nib as described above. This is fine for the new nib, but how do you sort out the outlets when you go back to the original nib?
    4) Give up and accept a 1:1 relationship between VCs and nibs. There is a nib for normal mode, a nib for edit mode, and each mode has a VC that subclasses a common superclass.

    In the end, I went with 4) and it works, but it requires a fair amount of extra work, because I have a model class that I instantiate in normal mode and then have to pass to the edit mode VC, because both modes need access to the model. I'm also using NSTimer and have to start and stop the timer in each mode. It is because of all this shared functionality that I wanted a single VC with 2 nibs in the first place.
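
    For option 3, a hedged Objective-C sketch of one way to keep both sets of outlets alive: load each nib once, retain its top-level view in its own property, and swap self.view between them, so neither nib ever needs its outlets re-resolved. This assumes normalView and editView are retained outlet properties wired to the respective nibs' File's Owner connections:

        - (void)showEditMode:(BOOL)edit {
            if (edit && self.editView == nil) {
                // Loading with owner:self fills the editView outlet exactly once.
                [[NSBundle mainBundle] loadNibNamed:@"EditMode" owner:self options:nil];
            }
            // Both views stay retained, so their outlets remain connected.
            self.view = edit ? self.editView : self.normalView;
        }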


  • php if clause inside foreach not retrieving data correctly

    - by Mike
    Here's my issue: In my controller, I want to grab user input from a form. I then parse the input and compare it to database values to ensure I'm grabbing the correct input. I simply want to match the user's answers to the questions, grab the user ID and the question ID, and then determine whether the answer applies to a multiple choice or checkbox question, or something else. I take those values and insert them into the answer table. Ignore the waiver stuff; I'll test that once I get the answers input correctly.

        // add answers and waiver consent records
        try {
            $answerArray = array();
            $waiverArray = array();
            // retrieve answers, waiver consents, and the question ID's from form object
            foreach ($formData as $key => $value) {
                $parts = explode("_", $key);
                if ($parts[0] == 'question') {
                    array_push($answerArray, $value);
                }
                if ($parts[0] == 'waiverTitle') {
                    array_push($waiverArray, $value);
                }
            }
            $questions = new Model_DbTable_Questions();
            $questionResults = $questions->getQuestionResults($session->idEvent);
            foreach ($questionResults as $qr) {
                if ($qr['questionType'] == 'multipleChoice' || $qr['questionType'] == 'checkBox') {
                    foreach ($answerArray as $aa) {
                        $answerData = $answers->addAnswer($lastUserID, $qr['idQuestion'], null, $aa);
                        echo count($answerData) . ', ' . $qr['questionType'] . ', ' . $aa . '<br />';
                    }
                } else {
                    foreach ($answerArray as $aa) {
                        $answerData = $answers->addAnswer($lastUserID, $qr['idQuestion'], $aa, null);
                        echo count($answerData) . ', ' . $qr['questionType'] . ', ' . $aa . '<br />';
                    }
                }
            }
        } catch (Zend_Db_Statement_Exception $e) {
            $e->getMessage();
            throw $e;
        }

    From my test data, I expect to get 2 records that match the multiple choice and checkbox criteria, and 1 record for text in the ELSE clause, like this:

        3, checkbox, 1
        3, multipleChoice, 1
        3, text, question_2

    What I get is a 3x3 Cartesian product - 3 question elements, each with the 3 possible answers - as this output from the echo statements shows:

        4, checkBox, 1
        4, checkBox, 1
        4, checkBox, question_2
        4, multipleChoice, 1
        4, multipleChoice, 1
        4, multipleChoice, question_2
        4, text, 1
        4, text, 1
        4, text, question_2

    I've tried placing the IF clause inside the inner foreach, but I get the same results. I've been staring at this problem for way too long and cannot see what I'm doing wrong. Your kind assistance would be greatly appreciated. Please let me know if my request requires more clarification.
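
    A hedged sketch of the usual fix, assuming the form field names encode the question ID as question_<idQuestion> (which the question_2 value echoed above suggests): key the parsed answers by that ID while scanning the form, then look up exactly one answer per question instead of looping over every answer for every question - the nested foreach over $answerArray is what produces the Cartesian product.

        // Build: question ID => submitted answer
        $answersByQuestion = array();
        foreach ($formData as $key => $value) {
            $parts = explode('_', $key);
            if ($parts[0] === 'question' && isset($parts[1])) {
                $answersByQuestion[(int) $parts[1]] = $value;
            }
        }

        foreach ($questionResults as $qr) {
            $id = $qr['idQuestion'];
            if (!isset($answersByQuestion[$id])) {
                continue; // unanswered question
            }
            $aa = $answersByQuestion[$id];
            if ($qr['questionType'] == 'multipleChoice' || $qr['questionType'] == 'checkBox') {
                $answers->addAnswer($lastUserID, $id, null, $aa);
            } else {
                $answers->addAnswer($lastUserID, $id, $aa, null);
            }
        }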


  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of TIFFs or PDFs).

    I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types, e.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint used (uses?): a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field that it maps to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:

    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best for Asp.Net? (I saw Raven and Velocity, but they're still kinda beta.)
    3. Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins; it would only be used if a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs denormalized "document" table)? Volumes would be up to 200,000 new documents per month for a single tenant.

    My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.


  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create 2 sets of data within table one:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So Table 1 has the values 1,2,4 in group 2, and Table 2 has the values 1,3,4. I expect the following results when joining for group 2:

        Table 1 LEFT OUTER Join Table 2

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

        Table 2 LEFT OUTER Join Table 1

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
        AND table2.number IS NULL

    and a filter in the ON for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of not using the filter in the ON clause, but in the WHERE clause?

    The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batchid for an extract, and I am comparing the latest extract in a temp table to the batch from yesterday stored in a partitioned table, which also has all the previously extracted batches as well.

    Code to create table 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)
        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)

    EDIT: A bit more context - depending on where I put the filter, I get different results. As stated above, the WHERE clause gives me the correct result in one case and the ON in the other. I am looking for a consistent way of doing this.

    WHERE -

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        on table1.number = table2.number
        WHERE table1.groupby = 2
        AND table2.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

    ON -

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        on table1.number = table2.number
        AND table1.groupby = 2
        WHERE table2.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        1           1           NULL
        2           2           NULL
        1           2           NULL

    WHERE (table 2 this time) -

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        on table2.number = table1.number
        AND table1.groupby = 2
        WHERE table1.number IS NULL

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    ON -

        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        on table2.number = table1.number
        WHERE table1.number IS NULL
        AND table1.groupby = 2

    Result:

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        (0) rows returned
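
    A consistent rule of thumb that covers both directions, sketched below: in a LEFT OUTER JOIN, predicates on the preserved (left) table belong in the WHERE clause, while predicates on the null-extended (right) table belong in the ON clause. Moving a filter across that line changes the join's meaning, which is exactly what the four result sets above demonstrate.

        -- Rows of table1 (group 2) missing from table2:
        -- table1 is preserved, so its filter goes in WHERE.
        SELECT t1.groupby, t1.number
        FROM table1 t1
        LEFT OUTER JOIN table2 t2 ON t1.number = t2.number
        WHERE t1.groupby = 2 AND t2.number IS NULL

        -- Rows of table2 missing from group 2 of table1:
        -- table1 is now null-extended, so its filter moves into ON.
        SELECT t2.number
        FROM table2 t2
        LEFT OUTER JOIN table1 t1
            ON t2.number = t1.number AND t1.groupby = 2
        WHERE t1.number IS NULL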


  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this.

    I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface, and I especially like its powerful Mat class and the overloaded operations available for matrix and image operations and seamless transformations. I've also tried (and implemented many small vision projects with) the opencv Python interface (new bindings; opencv 2.1), and I really enjoy Python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official new Python bindings were not stable enough, and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past, and although I've used mex files and other means to speed up programs, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be rewritten in C and compiled into mex files, and Matlab becomes nothing more than a glue language.

    Here come the sub-questions:

    1. For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2. For computer vision research work, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs Python (with the incomplete opencv bindings + numpy + scipy + matplotlib).
    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries, including opencv) without even needing to resort to Matlab or Python? In other words, given the currently existing computer vision and machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4. If you're currently using Java or C# for your research, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries?
    5. What is the de facto vision/machine learning programming language and its associated libraries used in your research group?

    Thanks in advance.

    Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.


  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process, and what did you do to improve it?

    My example: at a previous employer, all developers made database changes on one common development database. Then, when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are:

    1. ALL changes in the Dev database are included, some of which may still be 'works in progress'.
    2. Sometimes developers made conflicting changes (that were not noticed until the release was in production).
    3. It was a time-consuming and manual process to create and validate the script (by validate I mean try to weed out issues like problems 1 and 2).
    4. When there were problems with the script (e.g. the order in which things were run, such as creating a record which relies on a foreign key record which is in the script but not yet run), it took time to 'tweak' it so it ran smoothly.
    5. It's not an ideal scenario for Continuous Integration.

    So the solution was:

    1. Enforce a policy that all changes to the database must be scripted. A naming convention was important for ensuring the correct running order of the scripts.
    2. Create/use a tool to run the scripts at release time.
    3. Give developers their own copy of the database to develop against (so there was no more 'stepping on each other's toes').

    The next release after we started this process was much faster, with fewer problems; indeed, the only problems found were due to people 'breaking the rules', e.g. not creating a script. Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth. We applied a few other changes (like introducing CI), but this was the most significant; overall we reduced release time from around 3 hours down to a max of 10-15 minutes.


  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to Stack Overflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language.

    Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong.

    I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like. "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.


  • Streaming webcam video in Flash using MP4 encoding

    - by Herms
    One of the features of the Flash app I'm working on is the ability to stream a webcam to others. We're just using the built-in webcam support in Flash and sending it through FMS. We've had some people ask for higher quality video, but we're already using the highest quality setting we can in Flash (setting quality to 100%).

    My understanding is that the newer Flash players added support for MPEG-4 encoding for the videos. I created a simple test Flex app to try and compare the video quality of the MP4 vs FLV encodings. However, I can't seem to get MP4 to work at all. According to the Flex documentation, the only thing I need to do to use MP4 instead of FLV is prepend "mp4:" to the name of the stream when calling publish:

        Specify the stream name as a string with the prefix mp4: with or without the
        filename extension. The prefix indicates to the server that the file contains
        H.264-encoded video and AAC-encoded audio within the MPEG-4 Part 14 container
        format.

    When I try this, nothing happens. I don't get any events raised on the client side, no exceptions thrown, and my logging on the server side doesn't show any streams starting. Here's the relevant code:

        // These are all defined and created within the class.
        private var nc:NetConnection;
        private var sharing:Boolean;
        private var pubStream:NetStream;
        private var format:String;
        private var streamName:String;
        private var camera:Camera;

        // called when the user clicks the start button
        private function startSharing():void {
            if (!nc.connected) {
                return;
            }
            if (sharing) {
                return;
            }
            if (pubStream == null) {
                pubStream = new NetStream(nc);
                pubStream.attachCamera(camera);
            }
            startPublish();
            sharing = true;
        }

        private function startPublish():void {
            var name:String;
            if (this.format == "mp4") {
                name = "mp4:" + streamName;
            } else {
                name = streamName;
            }
            //pubStream.publish(name, "live");
            pubStream.publish(name, "record");
        }


  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc?

    More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input and command line) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc(), which is out of my control. Here is some relevant gdb output:

        (gdb) info threads
        [...]
        * 11 Thread 0x7ffff0056700 (LWP 73449)  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        [...]
        (gdb) thread 11
        [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        (gdb) bt
        #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
        #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
        #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
        #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
        #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
        #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
        #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
        #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
        #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
        #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
        #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
        (gdb) f 6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        286         res = calloc(size, 1);
        (gdb) p size
        $2 = 814080
        (gdb)

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 2067285
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    The program is not out of memory; it's using 41GB on a machine with 256GB available:

        $ top -b -n 1 | grep gmapper
        73437 user      20   0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:        258437     195567      62869          0         82     189677
        -/+ buffers/cache:       5807     252629
        Swap:            0          0          0

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.
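
    Since the abort happens inside malloc's own consistency check (malloc_printerr in the backtrace), the heap was almost certainly corrupted earlier by something else - an overflow or double free in any thread - and this calloc() is merely the first call to notice. Two cheap ways to localize it, sketched below under the assumption that my_calloc() is the allocation choke point: run with glibc's built-in checking enabled (MALLOC_CHECK_=3 in the environment; no recompile needed, and unlike mcheck() it is safe with threads), and/or paint canary bytes past each block so an assert fires near the overrun instead of far away from it:

        #include <assert.h>
        #include <stdlib.h>
        #include <string.h>

        #define CANARY_LEN  8
        #define CANARY_BYTE 0xAB

        static void *calloc_with_canary(size_t size)
        {
            unsigned char *p = calloc(size + CANARY_LEN, 1);
            if (p != NULL)
                memset(p + size, CANARY_BYTE, CANARY_LEN);  /* paint the tail */
            return p;
        }

        /* Call wherever convenient (e.g. just before free), with the block's size: */
        static void assert_canary(const unsigned char *p, size_t size)
        {
            for (int i = 0; i < CANARY_LEN; i++)
                assert(p[size + i] == CANARY_BYTE);  /* trips on past-the-end writes */
        }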

  • Codeigniter or PHP Amazon API help

    - by faya
    Hello, I have a problem searching through the Amazon web service using PHP in my CodeIgniter. I get an "InvalidParameter: timestamp is not in ISO-8601 format" response from the server. But I don't think the timestamp is the problem, because I have tried comparing it with the date format produced by http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html and it seems fine. Could anyone help? Here is my code:

        $private_key = 'XXXXXXXXXXXXXXXX'; // Took out real secret key
        $method = "GET";
        $host = "ecs.amazonaws.com";
        $uri = "/onca/xml";

        $timeStamp = gmdate("Y-m-d\TH:i:s.000\Z");
        $timeStamp = str_replace(":", "%3A", $timeStamp);

        $params["AWSAccesskeyId"] = "XXXXXXXXXXXX"; // Took out real access key
        $params["ItemPage"] = $item_page;
        $params["Keywords"] = $keywords;
        $params["ResponseGroup"] = "Medium2%2525COffers";
        $params["SearchIndex"] = "Books";
        $params["Operation"] = "ItemSearch";
        $params["Service"] = "AWSECommerceService";
        $params["Timestamp"] = $timeStamp;
        $params["Version"] = "2009-03-31";
        ksort($params);

        $canonicalized_query = array();
        foreach ($params as $param => $value) {
            $param = str_replace("%7E", "~", rawurlencode($param));
            $value = str_replace("%7E", "~", rawurlencode($value));
            $canonicalized_query[] = $param . "=" . $value;
        }
        $canonicalized_query = implode("&", $canonicalized_query);

        $string_to_sign = $method."\n\r".$host."\n\r".$uri."\n\r".$canonicalized_query;
        $signature = base64_encode(hash_hmac("sha256", $string_to_sign, $private_key, True));
        $signature = str_replace("%7E", "~", rawurlencode($signature));

        $request = "http://".$host.$uri."?".$canonicalized_query."&Signature=".$signature;
        $response = @file_get_contents($request);
        if ($response === False) {
            return "response fail";
        } else {
            $parsed_xml = simplexml_load_string($response);
            if ($parsed_xml === False) {
                return "parse fail";
            } else {
                return $parsed_xml;
            }
        }

    P.S. - Personally I think something is wrong in the generation of the signature from $string_to_sign when hashing it.
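    For what it's worth, my reading of the code (not a confirmed answer): the string-to-sign joins its parts with "\n\r" where the AWS signature spec uses a bare "\n"; the access-key parameter is spelled AWSAccesskeyId instead of AWSAccessKeyId; and the timestamp's colons are percent-encoded before the loop rawurlencode()s them again, so the server receives %253A sequences and rejects the timestamp as non-ISO-8601 (the ResponseGroup, presumably meant to be "Medium,Offers", is double-encoded the same way). A sketch of the signing section with those three things changed:

        // Sketch of the corrected signing steps; placeholder keys, same request otherwise.
        $timeStamp = gmdate("Y-m-d\TH:i:s\Z");       // leave raw; encoded once below
        $params["AWSAccessKeyId"] = "XXXXXXXXXXXX";  // note the capital K
        $params["ResponseGroup"]  = "Medium,Offers"; // raw value; rawurlencode() handles it
        $params["Timestamp"]      = $timeStamp;
        ksort($params);

        $canonicalized_query = array();
        foreach ($params as $param => $value) {
            $canonicalized_query[] = str_replace("%7E", "~", rawurlencode($param))
                                   . "=" . str_replace("%7E", "~", rawurlencode($value));
        }
        $canonicalized_query = implode("&", $canonicalized_query);

        // Parts are joined with "\n", not "\n\r".
        $string_to_sign = "GET\n" . $host . "\n" . $uri . "\n" . $canonicalized_query;
        $signature = rawurlencode(base64_encode(
            hash_hmac("sha256", $string_to_sign, $private_key, true)));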

  • How to code a keyboard button to switch between 2 modes?

    - by le.shep20
    Hi! I'm doing a project; I won't go into the details, but I will simplify the idea. I'm using Morse code (dot and dash) and I have two methods: convert_MorseToChar() and Convert_MorseToNum(). In convert_MorseToChar() there is a switch that compares the input from the user, which will be Morse codes, and maps it to characters:

        private String convert_MorseToChar(ref string Ch)
        {
            string morseToChar = "";
            switch (Ch)
            {
                case ".-":   morseToChar = "a"; break;
                case "-...": morseToChar = "b"; break;
                case "-.-.": morseToChar = "c"; break;
                case "-..":  morseToChar = "d"; break;
                case ".":    morseToChar = "e"; break;
            }
            return morseToChar;
        }

    The other method, Convert_MorseToNum(), uses the same Morse code combinations but maps them to numbers:

        private String Convert_MorseToNum(ref string Ch)
        {
            string morseToNum = "";
            switch (Ch)
            {
                case ".-":   morseToNum = "1"; break;
                case "-...": morseToNum = "2"; break;
                case "-.-.": morseToNum = "3"; break;
                case "-..":  morseToNum = "4"; break;
                case ".":    morseToNum = "5"; break;
            }
            return morseToNum;
        }

    Now the scenario: there are two TextBoxes. The user writes Morse codes in one, and the other is for the output. The user types dots "." and dashes "-" from the keyboard and presses Enter; the program then goes to one of the two methods to convert the Morse codes. But what tells the program which method to use?

    My question is: I want to create a mode key to switch between the two modes, MorseToChar and MorseToNum. I want the down-arrow key to act as the mode switch: after the user presses the down arrow, the program is in MorseToChar mode and every input goes directly through convert_MorseToChar() to produce characters. When the user presses the down arrow again, the program switches to MorseToNum mode, and every Morse code input goes directly through Convert_MorseToNum() to produce numbers. How can I do that? Please help! (Please excuse my English; it is not my native language.)
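    One way to do it, as a minimal WinForms sketch: keep a bool field as the current mode, flip it whenever KeyDown sees the down arrow, and pick the converter when Enter is pressed. The control names txtInput/txtOutput and the event wiring are assumptions about the form, not anything given above:

        // Sketch: toggle between the two converters with the down-arrow key.
        // Assumes this handler is wired to the input TextBox's KeyDown event.
        private bool numberMode = false;   // false = MorseToChar, true = MorseToNum

        private void txtInput_KeyDown(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.Down)
            {
                numberMode = !numberMode;  // flip the mode
                e.Handled = true;          // optionally keep the key from also moving the caret
            }
            else if (e.KeyCode == Keys.Enter)
            {
                string code = txtInput.Text.Trim();
                txtOutput.Text = numberMode
                    ? Convert_MorseToNum(ref code)
                    : convert_MorseToChar(ref code);
            }
        }

    A status label updated alongside numberMode would also help, so the user can see which mode is active before pressing Enter.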

  • Can you find a pattern to sync files knowing only dates and filenames?

    - by Robert MacLean
    Imagine, if you will, an operating system that has the following methods for files:

    - Create File: creates (writes) a new file to disk. Calling this if the file exists causes a fault.
    - Update File: updates an existing file. Calling this if the file doesn't exist causes a fault.
    - Read File: reads data from a file.
    - Enumerate Files: gets all files in a folder.

    Files themselves in this operating system have only the following metadata:

    - Created Time: the original date and time the file was created, by the Create File method.
    - Modified Time: the date and time the file was last modified by the Update File method. If the file has never been modified, this equals the Created Time.

    You have been given the task of writing an application which will sync the files between two directories (let's call them bill and ted) on a machine. However, it is not that simple; the client has required that:

    1. The application never faults (see the methods above).
    2. While the application is running, users can add and update files, and those will be synced the next time the application runs.
    3. Files can be added to either the ted or bill directory.
    4. File names cannot be altered.
    5. The application performs one sync per run.
    6. The application must work almost entirely in memory; in other words, you cannot write a log of filenames to disk and check it the next time.
    7. The exception to point 6 is that you can store dates and times between runs. Each date/time is associated with a key labeled A through J (so you have 10 to use), and you can compare keys between runs.
    8. There is no way to catch exceptions in the application.

    An answer will be accepted based on the following conditions:

    - The first answer to meet all requirements will be accepted.
    - If there is no way to meet all requirements, the answer which ensures the smallest number of missed changes per sync will be accepted.
    - A bounty (100 points) will be created as soon as possible for the prize. The winner will be selected one day before the bounty ends.

    Please ask questions in the comments and I will gladly update and refine the question based on them.
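    To make the constraints concrete, here is a skeleton of the naive one-directional pass, using one stored timestamp (say, key A) as the last-sync time. All names are illustrative stand-ins for the puzzle's fictional API, existence is checked via Enumerate Files because Create File faults on existing files, and the gaps this leaves (files added or updated mid-pass, clock differences between runs) are the interesting part of the puzzle:

        // Skeleton of a naive one-directional pass; not a full solution.
        // EnumerateFiles/CreateFile/UpdateFile/ReadFile stand in for the
        // four methods above; 'lastSync' comes from one of the ten keys.
        static void SyncOneWay(string src, string dst, DateTime lastSync)
        {
            var dstNames = new HashSet<string>(
                EnumerateFiles(dst).Select(f => f.Name));
            foreach (var f in EnumerateFiles(src))
            {
                if (!dstNames.Contains(f.Name))
                    CreateFile(dst, f.Name, ReadFile(src, f.Name)); // checked first, so no fault
                else if (f.ModifiedTime > lastSync)
                    UpdateFile(dst, f.Name, ReadFile(src, f.Name)); // changed since last run
            }
        }
        // A full sync is SyncOneWay(bill, ted, a) followed by SyncOneWay(ted, bill, a),
        // but note the second pass must not re-copy files the first pass just created.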

  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson; each lesson has its own branch:

        lesson1: A--B--C
                        \
        lesson2:         D--E--F
                                \
        lesson3:                 G--H--I

    However, suppose that now I want to make things easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3:

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K
                                \        \
        lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K--M
                                \        \  \
        lesson3:                 G--H--I--L--N

    Here are the problems I imagine having:

    1. If I had many lessons and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off.
    2. When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    3. If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    4. When I look at the history, I imagine becoming terribly confused about what I've done. Compare the diagram above to a hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

        lesson1: A--B--C--J
                           \
        lesson2:            D2--E2--F2--M
                                         \
        lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?
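    For problems 1 and 3, the mechanical part can at least be scripted; a sketch using the branch names from the diagrams above (the tag naming is just an illustrative convention):

        # Sketch: propagate a lesson-1 fix down the chain of branches...
        git checkout lesson2 && git merge lesson1
        git checkout lesson3 && git merge lesson2

        # ...and tag every branch for a published release.
        for b in lesson1 lesson2 lesson3; do
            git tag "v1.0-$b" "$b"
        done

    For problem 2, a single git fetch updates all remote-tracking branches at once; it's only merging them into the local lesson branches that has to happen per branch, which the same loop pattern covers.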
