Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

  • sproutcore vs javascriptMVC for web app development

    - by swami
    Hi, I want to use a JavaScript MVC framework for a complex web application (which will be one of a set of related apps and pages) on a digital archives intranet. I have been looking at SproutCore and JavaScriptMVC, and I want to choose one framework and stick with it. Does anybody know what the distinguishing features are when comparing these two? I want something simple and straightforward that I can customize/hack easily and that doesn't get in my way too much, but that at the same time gives me a basis for keeping my code nicely organized and event-driven. I also plan on using jQuery substantially.

    I know SproutCore is backed by Apple, looks like it is getting more popular by the day, and has a nice green website :), whereas JavaScriptMVC looks less professional, with less of a following and less momentum behind it. I've done the tutorials for both and was more impressed by SproutCore (in the JMVC tutorial you don't really do anything substantial) - but somewhere in the back of my mind I feel that JMVC might just be better because it doesn't try to do too much: it just gives you MVC functionality based on a couple of jQuery plugins, and you can use jQuery for everything else, so it's flexible. SproutCore, on the other hand, has more of an API of its own, which is also nice in a way - but then you're kind of stuck within it... hmmm, I'm confused :). Any thoughts would be much appreciated.

    Read the article

  • How to pass additional convert options to paperclip on Heroku?

    - by Yuri
    UPD: here is the current model.

        class User < ActiveRecord::Base
          Paperclip.options[:swallow_stderr] = false
          has_attached_file :photo,
            :styles => { :square => "100%", :large => "100%" },
            :convert_options => { :square => "-auto-orient -geometry 70X70#",
                                  :large  => "-auto-orient -geometry X300" },
            :storage => :s3,
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => ":attachment/:id/:style.:extension",
            :bucket => 'mybucket'
          validates_attachment_size :photo, :less_than => 5.megabyte
        end

    This works great on my local machine, but gives me an error on Heroku: "There was an error processing the thumbnail for stream.20143". The thing is, I want to auto-orient photos before resizing, so they resize properly. The only working variant now (thanks to jonnii) is resizing without auto-orient:

        has_attached_file :photo,
          :styles => { :square => "70X70#", :large => "X300" },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":attachment/:id/:style.:extension",
          :bucket => 'mybucket'

    How can I pass additional convert options to Paperclip on Heroku?
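
    A minimal sketch of one thing worth trying, assuming a Paperclip version whose :convert_options supports the :all pseudo-style (which applies the flags to every style):

        # Sketch: move -auto-orient out of the per-style geometry strings
        # and into a single :all entry; the geometries go back into :styles.
        has_attached_file :photo,
          :styles          => { :square => "70X70#", :large => "X300" },
          :convert_options => { :all => "-auto-orient" },
          :storage         => :s3,
          :s3_credentials  => "#{RAILS_ROOT}/config/s3.yml",
          :path            => ":attachment/:id/:style.:extension",
          :bucket          => 'mybucket'

    If this still fails only on Heroku, the ImageMagick installed there may simply be too old to know -auto-orient; with :swallow_stderr disabled as above, the log should show the underlying convert error.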

    Read the article

  • storing user info/pass in web.config authentication

    - by Tomaszewski
    Hello, I am trying to write a simple internal app with some simple authentication. I'm also trying to make this quick and learn about forms authentication via web.config. I have authentication working if I hard-code the user name and password in C# and do a simple conditional check. However, I'm having a tough time storing a user/pass to be checked against in the web.config file. The MSDN manual says to put this into web.config:

        <authentication mode="Forms">
          <forms loginUrl="login.aspx">
            <credentials passwordFormat="SHA1">
              <user name="user1" password="27CE4CA7FBF00685AF2F617E3F5BBCAFF7B7403C" />
              <user name="user2" password="D108F80936F78DFDD333141EBC985B0233A30C7A" />
              <user name="user3" password="7BDB09781A3F23885CD43177C0508B375CB1B7E9" />
            </credentials>
          </forms>
        </authentication>

    However, the minute I add 'credentials' into the 'authentication' section, I get this error:

        Server Error in '/' Application.
        Configuration Error
        Description: An error occurred during the processing of a configuration file required
        to service this request. Please review the specific error details below and modify
        your configuration file appropriately.
        Parser Error Message: Unrecognized element 'credentials'.
        Source Error:
            Line 44: <authentication mode="Forms">
            Line 45:   <forms loginUrl="login.aspx" />
            Line 46:   <credentials>
            Line 47:
            Line 48:   </credentials>
        Source File: C:\inetpub\wwwroot\asp\projects\passwordCatalog\passwordCatalog\web.config
        Line: 46

    So my question is, how and where would I add the following in the web.config file?

        <credentials passwordFormat="SHA1">
          <user name="johndoe" password="mypass123" />
        </credentials>
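
    The error output itself points at the likely cause; a sketch of the fix:

        <!-- Sketch of the likely fix: <credentials> must be nested inside
             <forms>. In the failing config, <forms loginUrl="login.aspx" />
             is self-closing, so <credentials> lands directly under
             <authentication>, where the parser does not recognize it.
             Note that with passwordFormat="SHA1" the password attribute
             must hold the SHA1 hash of the password, not the plain text. -->
        <authentication mode="Forms">
          <forms loginUrl="login.aspx">
            <credentials passwordFormat="SHA1">
              <user name="johndoe" password="[SHA1 hash of mypass123]" />
            </credentials>
          </forms>
        </authentication>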

    Read the article

  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ) and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing.

    The issue is that I don't understand why the vendor's product would need these capabilities. I've asked them, and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited Win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc.) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package.

    If you're wondering why I would care: a previous version of the product had a bug in the subauthentication package DLL that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel-sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard Win32 authentication/impersonation APIs well enough to make that claim definitively.

    So, to the Win32 security gurus out there: is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!
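
    For reference, the plain-API impersonation path the question has in mind looks roughly like this - a minimal sketch with error handling trimmed and placeholder credentials:

        /* Sketch: impersonation via the standard Win32 APIs, no
         * subauthentication package. LsaLogonUser is the lower-level
         * sibling of LogonUser; for a plain "log on and impersonate"
         * flow, LogonUser is enough. */
        #include <windows.h>

        BOOL ImpersonateAs(LPCWSTR user, LPCWSTR domain, LPCWSTR password)
        {
            HANDLE hToken = NULL;
            BOOL ok;

            if (!LogonUserW(user, domain, password,
                            LOGON32_LOGON_INTERACTIVE,
                            LOGON32_PROVIDER_DEFAULT, &hToken))
                return FALSE;

            ok = ImpersonateLoggedOnUser(hToken); /* calling thread now runs as user */
            CloseHandle(hToken);
            return ok;
            /* ... do the work, then RevertToSelf(); */
        }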

    Read the article

  • Error: Only LDAP Connection Strings are Supported against Active Directory

    - by Brent Pabst
    I have the following ASP.NET membership section defined in the Web.config file:

        <membership defaultProvider="AspNetActiveDirectoryMembershipProvider">
          <providers>
            <clear/>
            <add connectionStringName="ADService"
                 connectionUsername="umanage"
                 connectionPassword="letmein"
                 enablePasswordReset="true"
                 enableSearchMethods="true"
                 applicationName="uManage"
                 clientSearchTimeout="30"
                 serverSearchTimeout="30"
                 name="AspNetActiveDirectoryMembershipProvider"
                 type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </providers>
        </membership>

    The connection string looks like this:

        <add name="ADService" connectionString="ldap://familynet.local" />

    Whenever I call the following code:

        Membership.GetAllUsers();

    I get the following error:

        Configuration Error
        Description: An error occurred during the processing of a configuration file required
        to service this request. Please review the specific error details below and modify
        your configuration file appropriately.
        Parser Error Message: Only LDAP connection strings are supported against Active
        Directory and ADAM.

    I don't understand why the system is claiming the LDAP connection string is bad, because it is in fact a valid LDAP string as specified by the MSDN documentation: http://msdn.microsoft.com/en-us/library/system.web.security.activedirectorymembershipprovider.aspx Any ideas?
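
    One detail worth checking, sketched below: the ADSI path the provider parses is case-sensitive about its moniker, so the scheme almost certainly needs to be the uppercase "LDAP://" rather than "ldap://" (an assumption to verify against the MSDN page cited above):

        <!-- Sketch: same server, uppercase moniker -->
        <add name="ADService" connectionString="LDAP://familynet.local" />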

    Read the article

  • Using ComplexType with ToList causes InvalidOperationException

    - by Chris Paynter
    I have this model:

        namespace ProjectTimer.Models
        {
            public class TimerContext : DbContext
            {
                public TimerContext() : base("DefaultConnection") { }

                public DbSet<Project> Projects { get; set; }
                public DbSet<ProjectTimeSpan> TimeSpans { get; set; }
            }

            public class DomainBase
            {
                [Key]
                public int Id { get; set; }
            }

            public class Project : DomainBase
            {
                public UserProfile User { get; set; }
                public string Name { get; set; }
                public string Description { get; set; }
                public IList<ProjectTimeSpan> TimeSpans { get; set; }
            }

            [ComplexType]
            public class ProjectTimeSpan
            {
                public DateTime TimeStart { get; set; }
                public DateTime TimeEnd { get; set; }
                public bool Active { get; set; }
            }
        }

    When I try to use this action, I get the exception "The type 'ProjectTimer.Models.ProjectTimeSpan' has already been configured as an entity type. It cannot be reconfigured as a complex type."

        public ActionResult Index()
        {
            using (var db = new TimerContext())
            {
                return View(db.Projects.ToList());
            }
        }

    The view is using the model:

        @model IList<ProjectTimer.Models.Project>

    Can anyone shine some light on why this would be happening?
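
    A sketch of the likely cause, going only by EF Code First's rule that a type cannot be both an entity and a complex type: exposing ProjectTimeSpan through a DbSet registers it as an entity, which then collides with the [ComplexType] attribute. Dropping the DbSet removes the conflict:

        public class TimerContext : DbContext
        {
            public TimerContext() : base("DefaultConnection") { }

            public DbSet<Project> Projects { get; set; }
            // Removed: a [ComplexType] cannot also be an entity set.
            // public DbSet<ProjectTimeSpan> TimeSpans { get; set; }
        }

    Even then, the IList<ProjectTimeSpan> on Project may need rethinking: EF does not map collections of complex types, so ProjectTimeSpan may ultimately need to be a real entity (with a key) instead.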

    Read the article

  • How to fix width of DIV that contains floated elements?

    - by joe
    I have a DIV container with several inner DIVs laid out by floating them all left. The inner DIVs may change width on certain events, and the containing DIV adjusts accordingly. I use float:left on the container to keep it shrunk to the width of the inner divs. I use float:left on the inner divs so the layout is clean even when their contents change.

    The catch is that I want the DIV container width and height to remain constant, UNLESS a particular event causes a change to the inner widths. Conceptually I want to use float on the inners for the layout benefit, but then I want to "fix" them so they don't float around. So if the container is 700px wide, I want it to remain so even if the user narrows the browser window. I'd like the container and its internal DIVs to just be clipped by the browser window.

    I sense this can all be done nicely in CSS, I just can't quite figure out how. I'm not averse to adding another container if necessary... Since the only desired layout changes are event-based, I am also willing to use a bit of JS. But is this necessary? (And I'm still not sure I know what to modify: container dimensions? inner floatiness? other?)

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
        <style type="text/css">
          #canvas    { overflow:auto;  /* for clearing floats */ }
          #container { float:left;     /* so container shrinks around contained divs */
                       border:thin solid blue; }
          .inner     { float:left;     /* so inner elems line up nicely w/o saying fixed coords */
                       padding-top:8px; padding-bottom:4px;
                       padding-left:80px; padding-right:80px; }
          #inner1    { background-color:#ffdddd; }
          #inner2    { background-color:#ddffdd; }
          #inner3    { background-color:#ddddff; }
        </style>
        </head>
        <body>
          <div id="canvas">
            <div id="container">
              <div id="inner1" class="inner"> inner 1 </div>
              <div id="inner2" class="inner"> inner 2 </div>
              <div id="inner3" class="inner"> inner 3 </div>
            </div>
          </div>
          cleared element
        </body>
        </html>
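
    A minimal sketch of one CSS-only answer, using the 700px example width from the question: give the container an explicit width so it stops shrink-wrapping, and let JS change that width only when the triggering event fires.

        /* Sketch: a fixed-width container no longer reflows its floats when
           the window narrows; the window just clips it instead. */
        #container {
            float: left;
            width: 700px;   /* update from JS only on the relevant event */
            border: thin solid blue;
        }

    With a fixed width, the inner floats keep their positions regardless of window size; the float on #container itself then only matters for shrink-wrapping and could arguably be dropped.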

    Read the article

  • scraping text from multiple html files into a single csv file

    - by Lulu
    I have just over 1500 HTML pages (1.html to 1500.html). I have written code using Beautiful Soup that extracts most of the data I need, but "misses" some of the data within the table.

    My input: e.g. file 1500.html

    My code:

        #!/usr/bin/env python
        import glob
        import codecs
        from BeautifulSoup import BeautifulSoup

        with codecs.open('dump2.csv', "w", encoding="utf-8") as csvfile:
            for file in glob.glob('*html*'):
                print 'Processing', file
                soup = BeautifulSoup(open(file).read())
                rows = soup.findAll('tr')
                for tr in rows:
                    cols = tr.findAll('td')
                    #print >> csvfile, "#".join(col.string for col in cols)
                    #print >> csvfile, "#".join(td.find(text=True))
                    for col in cols:
                        print >> csvfile, col.string
                    print >> csvfile, "==="
                print >> csvfile, "***"

    Output: one CSV file, with 1500 lines of text and columns of data.

    For some reason my code does not pull out all the required data but "misses" some, e.g. the Address1 and Address2 data at the start of the table do not come out. I modified the code to put in *** and === separators, and I then use Perl to produce a clean CSV file. Unfortunately, I'm not sure how to rework my code to get all the data I'm looking for!
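
    A sketch of the usual explanation: col.string is None whenever a cell contains nested tags (links, spans, line breaks), which would account for the "missing" address cells. Joining all of a cell's text nodes instead recovers the full text (BeautifulSoup 3 API, matching the import above):

        for col in cols:
            # col.string fails on <td><a>...</a></td>; this does not.
            text = u''.join(col.findAll(text=True)).strip()
            print >> csvfile, text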

    Read the article

  • conditionals for C++ using MSBuild/vsbuild?

    - by redtuna
    I have a C++ project in Visual Studio 2008 and I'd like to be able to compile several versions from the command line, defining conditional variables (a.k.a. #define). If it were just a single file to compile, I'd use something like cl /D, but this is complex enough that I would like to be able to use VS's other features, like the build order etc.

    I've seen a similar question asked on Stack Overflow, and the answer was to use /p:DefineConstants="var1;var2". This doesn't seem to work with C++, though. The other problem with that answer is that it replaces the conditional variables instead of adding to them. The vcproj files for C++ look quite different; if msbuild (or vsbuild) had a way to change Configurations/Tool[name="VCCLCompilerTool"] we'd be golden, but I haven't found such an option. The vcproj files are under source control, so I'd rather not have a script mess with them.

    I've considered doubling the number of configurations (one with the #define, one without). That'd be annoying, and I'm especially unhappy with having to modify these configurations in tandem every time I need to change anything there. A previous similar question found no solution; I'm hoping that has changed since? How would you go about building those variants (with and without the define) from the command line? Thanks!
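
    One workaround worth sketching, since cl.exe documents that it prepends whatever is in the CL environment variable to every compile: per-variant defines can be injected without touching the .vcproj files at all (variable and project names below are placeholders):

        rem Sketch: inject defines through the CL environment variable,
        rem then drive the full VS2008 build (build order, deps, etc.).
        set CL=/DVARIANT_A /DEXTRA_LOGGING
        vcbuild MyProject.vcproj "Release|Win32"

    The same trick should apply however the build is driven, since either way the compile ultimately runs cl.exe.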

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?"

    Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements).

    For example, try to edit one of these characters: 𝄞 𝕥 𝟶 𠂊 You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:

    - Opera has problems editing them
    - Notepad can't deal with them correctly (delete them, for example)
    - File name editing in Windows dialogs is broken
    - All Qt3 applications can't deal with them
    - StackOverflow seems to remove these characters if edited directly as Unicode characters, and only seems to allow them as HTML Unicode escapes

    So... this was a very simple test. Do you think that UTF-16 should be considered harmful?
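
    The surrogate-pair trap is easy to demonstrate; a small self-contained illustration in Java (one of the UTF-16 platforms the question names):

        // U+1D11E MUSICAL SYMBOL G CLEF, written as its surrogate pair.
        public class SurrogateDemo {
            public static void main(String[] args) {
                String clef = "\uD834\uDD1E";
                System.out.println(clef.length());                          // 2 UTF-16 units
                System.out.println(clef.codePointCount(0, clef.length()));  // 1 character
                // Any code that indexes by char, e.g. clef.charAt(0),
                // sees a lone surrogate, not a character.
            }
        }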

    Read the article

  • Making an AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request during an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It's running correctly before the async postback is initiated, but it stops running during the async postback, and doesn't run again until after the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF Web service that is working correctly to fetch data and present it on the page but, like I said, it doesn't fetch and present the data until after the async postback completes.

    During the async postback, the long-running task sends updates from the page to the web service. The problem is that I can debug and step through the web service and see that the status updates are correctly set, but the updates aren't retrieved by the client script until the async postback completes. It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes.

    Is there a way to get the ScriptManager, or otherwise, to run the script to fetch data from the WCF web service during the async postback? I've tried various methods of using the PageRequestManager to run the script on the client-side BeginRequest event for the async postback, but it runs the script, then stops processing the code that should be running via setInterval() while the page request executes.

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, first time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about it. The database itself has 44 tables, and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDb. The SQL query that I am about to show is from that program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan, etc.

        QUERY PLAN
        HashAggregate  (cost=46492.52..46493.50 rows=98 width=46)
          Output: public.movies.title, public.movies.movieid, public.movies.year
          ->  Append  (cost=39094.17..46491.79 rows=98 width=46)
                ->  HashAggregate  (cost=39094.17..39094.87 rows=70 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on movies  (cost=0.00..39093.65 rows=70 width=46)
                            Output: public.movies.title, public.movies.movieid, public.movies.year
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                ->  Nested Loop  (cost=0.00..7395.94 rows=28 width=46)
                      Output: public.movies.title, public.movies.movieid, public.movies.year
                      ->  Seq Scan on akatitles  (cost=0.00..7159.24 rows=28 width=4)
                            Output: akatitles.movieid, akatitles.language, akatitles.title, ...
                            Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '"%}'::text))
                      ->  Index Scan using movies_pkey on movies  (cost=0.00..8.44 rows=1 width=46)
                            Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid
                            Index Cond: (public.movies.movieid = akatitles.movieid)

        SELECT * FROM (
          (SELECT DISTINCT title, movieid, year
             FROM movies
            WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}'))
          UNION
          (SELECT movies.title, movies.movieid, movies.year
             FROM movies
            INNER JOIN akatitles ON movies.movieid = akatitles.movieid
            WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))
        ) AS union_tmp2;

    This returns 612 rows in 9078 ms. The database backup (plain text) is 1.61 GB. It's a really complex query and I am not fully cognizant of it; as I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards, Anthoni
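
    Going only by the plan above - both branches are sequential scans because a leading-wildcard ILIKE cannot use a btree index - one avenue to sketch is a trigram index from the pg_trgm contrib module (assuming a PostgreSQL version, 8.4 or later, whose pg_trgm supports GIN):

        -- Sketch: trigram indexes let the planner serve ILIKE '%Babe%'
        -- from an index instead of scanning ~1M rows per branch.
        CREATE INDEX movies_title_trgm    ON movies    USING gin (title gin_trgm_ops);
        CREATE INDEX akatitles_title_trgm ON akatitles USING gin (title gin_trgm_ops);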

    Read the article

  • ASP.Net GridView UpdatePanel Paging Gives Error On Second Click

    - by joe
    I'm trying to implement a GridView with paging inside an UpdatePanel. Everything works great when I do my first click: the paging kicks in and the next set of data is loaded quickly. However, when I then try to click a link for another page of data, I get the following error:

        Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while
        processing the request on the server. The status code returned from the server was: 12030

    aspx code:

        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
          <contenttemplate>
            <asp:GridView ID="GridView1" runat="server" CellPadding="2" AllowPaging="true"
                AllowSorting="true" PageSize="20"
                OnPageIndexChanging="GridView1_PageIndexChanging"
                OnSorting="GridView1_PageSorting" AutoGenerateColumns="False">
              <Columns>
                <asp:BoundField DataField="ActivityLogID" HeaderText="Activity Log ID" SortExpression="ActivityLogID" />
                <asp:BoundField DataField="ActivityDate" HeaderText="Activity Date" SortExpression="ActivityDate" />
                <asp:BoundField DataField="ntUserID" HeaderText="NTUserID" SortExpression="ntUserID" />
                <asp:BoundField DataField="ActivityStatus" HeaderText="Activity Status" SortExpression="ActivityStatus" />
              </Columns>
            </asp:GridView>
          </contenttemplate>
        </asp:UpdatePanel>

    code behind:

        private void bindGridView(string sortExp, string sortDir)
        {
            SqlCommand mySqlCommand = new SqlCommand(sSQL, mySQLconnection);
            SqlDataAdapter mySqlAdapter = new SqlDataAdapter(mySqlCommand);
            DataTable dt = new DataTable();
            mySqlAdapter.Fill(dt);

            DataView myDataView = dt.DefaultView;
            if (sortExp != string.Empty)
            {
                myDataView.Sort = string.Format("{0} {1}", sortExp, sortDir);
            }
            GridView1.DataSource = myDataView;
            GridView1.DataBind();

            if (mySQLconnection.State == ConnectionState.Open)
            {
                mySQLconnection.Close();
            }
        }

        protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            GridView1.PageIndex = e.NewPageIndex;
            bindGridView(string.Empty, string.Empty);
        }

        protected void GridView1_PageSorting(object sender, GridViewSortEventArgs e)
        {
            bindGridView(e.SortExpression, sortOrder);
        }

    Any clues on what is causing the error on the second click?

    Read the article

  • Missing ideas in programming language design

    - by meyka
    I wanted to try something new, so I designed some programming languages and wrote interpreters for them:

    1. A rather low-level, not very expressive language. (I didn't want to parse complex expressions right at the beginning.) It featured variables (yay), subroutines with a call stack, basic arithmetic functions, basic string manipulation, ... Code in the language looks like this:

        set i 0
        inc i
        print i

    Very, very basic, you see.

    2. A more high-level language. I decided to make it structured, so it featured things like if-else, while, functions, and so on - the stuff most programming languages have. It ended up like an unworthy Python clone, and I hated that.

    3. A code-golf language, which ended up similar to J, golfcode, APL, etc. Nothing special.

    As you can see, I don't lack the skills but the ideas. I can't figure out anything new - not even bad, unnecessary things - for my languages. Do you know of some weird things I could implement in my languages, which don't try to make programming harder (like most esoteric languages) but funnier or more different from other languages? It can't be possible that every weird thing has been tried out so far, can it?

    Read the article

  • error with "pmem.c" compiling linux source code for android

    - by Preetam
    I am compiling Linux source code for the Android emulator. When I execute the make command (for building and cross-compiling the Linux source), I get the following error in the "pmem.c" file:

        root@ubuntu:~/common# make
          CHK     include/linux/version.h
          CHK     include/linux/utsrelease.h
          SYMLINK include/asm -> include/asm-x86
          CALL    scripts/checksyscalls.sh
          CHK     include/linux/compile.h
          CC      drivers/misc/pmem.o
        drivers/misc/pmem.c:441: error: conflicting types for 'phys_mem_access_prot'
        /home/preetam/common/arch/x86/include/asm/pgtable.h:383: note: previous declaration of 'phys_mem_access_prot' was here
        drivers/misc/pmem.c: In function 'flush_pmem_file':
        drivers/misc/pmem.c:805: error: implicit declaration of function 'dmac_flush_range'
        drivers/misc/pmem.c: In function 'pmem_setup':
        drivers/misc/pmem.c:1265: error: implicit declaration of function 'ioremap_cached'
        drivers/misc/pmem.c:1266: warning: assignment makes pointer from integer without a cast
        make[2]: *** [drivers/misc/pmem.o] Error 1
        make[1]: *** [drivers/misc] Error 2
        make: *** [drivers] Error 2
        root@ubuntu:~/common#

    How do I resolve this error? It seems there may be some problem in the "pmem.c" file, and I may have to choose a different git repository - but that would be a very complex thing, as I have already done most of the work to get here. I might have to find the correct version of this file. Please, someone, tell me what I should do and how to solve these errors. Please help... thank you!

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64-encode data to put it in a URL and then decode it within my HttpHandler. I have found that base64 encoding allows for a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" on Wikipedia:

        A modified Base64 for URL variant exists, where no padding '=' will be used, and the
        '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_',
        so that using URL encoders/decoders is no longer necessary and has no impact on the
        length of the encoded value, leaving the same encoded form intact for use in
        relational databases, web forms, and object identifiers in general.

    Using .NET, I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses base64EncodedText

    But I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);

        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the "Guid to Base64 for URL" StackOverflow entry, but that has a known length and therefore they can hardcode the number of equals signs needed at the end.
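
    A sketch of the padding step, using only the fact that valid base64 length is a multiple of four, so the remainder determines how many '=' chars were stripped (a remainder of 1 can never occur in well-formed input):

        // Sketch: full decode path for the URL variant.
        public static byte[] DecodeBase64Url(string base64UrlEncodedText)
        {
            string s = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 0: break;                 // already a multiple of 4
                case 2: s += "=="; break;      // two padding chars were stripped
                case 3: s += "=";  break;      // one padding char was stripped
                default: throw new FormatException("Illegal base64url string");
            }
            return Convert.FromBase64String(s);
        }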

    Read the article

  • Is there any danger in calling free() or delete instead of delete[]? [closed]

    - by Matt Joiner
    Possible duplicate: (POD) freeing memory: is delete[] equal to delete?

    Does delete deallocate the elements beyond the first in an array?

        char *s = new char[n];
        delete s;

    Does it matter in the above case, seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array? For more complex types, would delete call the destructor of objects beyond the first one?

        Object *p = new Object[n];
        delete p;

    How can delete[] deduce the number of Objects beyond the first? Wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example, one could assume that not all allocators would provide a granularity of a single byte; any particular allocation could then exceed the required size for each element by a whole element or more.

    For primitive types, such as char and int, is there any difference between:

        int *p = new int[n];
        delete p;
        delete[] p;
        free(p);

    except for the routes taken by the respective calls through the delete/free deallocation machinery?
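
    For reference, a minimal illustration of the pairing rule as the language defines it - new[] must be matched by delete[], and anything else (plain delete, free) is undefined behavior even for char:

        #include <string>

        struct Object { std::string name; };   // non-trivial destructor

        int main()
        {
            char *s = new char[16];
            delete[] s;        // not: delete s;  or  free(s);

            Object *p = new Object[4];
            delete[] p;        // runs all four destructors; plain delete would not
        }

    How delete[] knows the count is an implementation detail; a common strategy is storing the element count in a header just before the array, which is exactly the bookkeeping that plain delete does not expect to find.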

    Read the article

  • Getting a job in the games industry as a developer, just knowing a game engine

    - by numerical25
    I recently enrolled in a community college for game development, but I am skeptical about the curriculum. I have no experience in the gaming industry, so I wouldn't be able to tell whether it's a good investment or not - so I am asking you. I don't want to get too much into the details of all the classes I am taking, so I will try to be brief.

    By the time I graduate, I should have an understanding of how a game engine works. I will be working with the Unreal Engine to develop a multiplayer game from scratch. So in the process of my final project, I will learn how to work within the Unreal Engine, learn Python, and learn how to use its API to connect to a remote server and build game mechanics. Overall I will also receive an associate's degree in game development. I learn C++ but not C; the director said he was trying to implement C in the program as well.

    What I notice is that I will not learn how to build a 3D game engine from scratch. They do not teach any artificial intelligence (AI). I will not learn how to work with the graphics card using a graphics API such as DirectX or OpenGL. I know building a game engine from scratch is a little complex, but at the same time the track requires me to take some advanced mathematics courses, such as calculus and geometry 1 and 2, and I also have to take a physics class. I just think that's a little much for just learning how to use the Unreal Engine without actually building one or trying to learn the anatomy of a game engine.

    Is this good enough to possibly land me a job in the industry? If I left anything out or was not detailed enough, please feel free to ask more questions. Edit: I do learn data structures and algorithms.

    Read the article

  • scriptsharp reference web service / strongly type to results model

    - by user175528
    With scriptsharp (Script#), is it possible to get strong typing when calling a service defined in my web app? The only ways I can see are:

    1. Use linked/shared files to shadow-copy my results classes / domain models across into my Script# lib
    2. Replicate my model across in the Script# lib and use AutoMapper to validate
    3. Use some .tt file to code-gen

    Also, even if I can do this, how do I get around the auto camel-casing Script# does when my service result (asmx) won't do the same? (So my JSON response will come back as UserMessage, but Script# will have changed that to userMessage.)

    Basically, what I am looking to use Script# to achieve is better compile-time support against our domain model when calling and processing services in JavaScript - something like this:

    Scriptlet:

        public static class MyScriptlet
        {
            public static void Main()
            {
                MyService.Service1("hello", ProcessResponse);
            }

            public static void ProcessResponse(MyService.Service1ResponseData resp)
            {
                jQuery.Select("#Message").Text(resp.UserMessage);
                jQuery.Select("#Detail").Text(resp.UserDetail);
            }
        }

    Service (in our web app):

        public class MyService
        {
            public class Service1ResponseData
            {
                public string UserMessage { get; set; }
                public string UserDetail { get; set; }
            }

            public Service1ResponseData Service1(string user)
            {
                return new Service1ResponseData() { UserMessage = "hi", UserDetail = "some text" };
            }
        }

    Read the article

  • Proper use of HTTP status codes in a "validation" server

    - by Romulo A. Ceccon
    Among the data my application sends to a third-party SOA server are complex XMLs. The server owner does provide the XML schemas (.xsd) and, since the server rejects invalid XMLs with a meaningless message, I need to validate them locally before sending. I could use a stand-alone XML schema validator, but they are slow, mainly because of the time required to parse the schema files. So I wrote my own schema validator (in Java, if that matters) in the form of an HTTP server which caches the already-parsed schemas.

    The problem is: many things can go wrong in the course of the validation process. Other than unexpected exceptions and successful validation:

    - the server may not find the schema file specified
    - the file specified may not be a valid schema file
    - the XML may be invalid against the schema file

    Since it's an HTTP server, I'd like to provide the client with meaningful status codes. Should the server answer with a 400 error (Bad Request) for all the above cases? Or do they have nothing to do with HTTP, so it should answer 200 with a message in the body? Any other suggestion?

    Update: the main application is written in Ruby, which doesn't have a good XML schema validation library, so a separate validation server is not over-engineering.
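
    One defensible mapping, sketched in Java as plain constants - a judgment call rather than a standard, on the reasoning that the XML under test is the request entity (the client's fault) while the schemas are the validation server's own configuration (the server's fault):

        // Sketch: HTTP status per validation outcome. 422 Unprocessable
        // Entity comes from WebDAV (RFC 4918) and is often borrowed for
        // "well-formed but fails validation"; plain 400 works there too.
        final class ValidationStatus {
            static final int SCHEMA_NOT_FOUND    = 500; // server misconfiguration
            static final int SCHEMA_NOT_A_SCHEMA = 500; // the .xsd itself is broken
            static final int XML_NOT_WELL_FORMED = 400; // unparsable request body
            static final int XML_INVALID         = 422; // parses, fails the schema
            static final int XML_VALID           = 200; // report in response body
        }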

    Read the article

  • sencha dataitem datamap setItems

    - by user1795667
    I'm trying to follow the kitten example given here: http://www.sencha.com/blog/dive-into-dataview-with-sencha-touch-2-beta-2#comment_form and I have complex components in which one of the properties of my data is a list of objects. I did find a method for setting a list of objects, setItems, however it does not seem to work. My object array is my model MyApp.model.Sponsor. Could anyone suggest what I'm missing to get this working?

        Ext.define('MyListItem', {
            extend: 'Ext.dataview.component.DataItem',
            requires: ['Ext.Button', 'Ext.Img', 'MyApp.model.Sponsors', 'MyApp.model.Sponsor'],
            xtype: 'mylistitem',

            config: {
                sponsor: true,
                dataMap: {
                    getSponsor: {
                        setItems: 'sponsor'
                    }
                }
            },

            applySponsor: function(config) {
                // I put an alert here to see if I get getSponsor(),
                // but the object I get here is undefined
                alert(this.getSponsor());
                return Ext.factory(config, MyApp.model.Sponsor, this.getSponsor());
            },

            updateSponsor: function(newNameButton, oldNameButton) {
                if (oldNameButton) {
                    this.remove(oldNameButton);
                }
                if (newNameButton) {
                    this.add(newNameButton);
                }
            },

            onSponsorTap: function(button, e) {
                var sponsors = record.get('sponsor');
                // my specific action
            }
        });

        Ext.define('MyApp.model.Sponsors', {
            extend: 'Ext.data.Model',
            xtype: 'Sponsors_m',
            config: {
                fields: [
                    { name: 'level',   type: 'auto' },
                    { name: 'id',      type: 'int' },
                    { name: 'sponsor', type: 'Sponsor' }
                ]
            }
        });

        Ext.define('MyApp.model.Sponsor', {
            extend: 'Ext.data.Model',
            xtype: 'Sponsor_m',
            config: {
                fields: [
                    { name: 'name',        type: 'auto' },
                    { name: 'image',       type: 'auto' },
                    { name: 'url',         type: 'auto' },
                    { name: 'description', type: 'auto' }
                ]
            }
        });

    Read the article

  • Integration tests in Continuous Integration environment: Database and filesystem state

    - by dario_ramos
    I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files on the hard drive with references to them in the DB, and the software needs all of those, in a coherent state, to work properly.

    Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the Continuous Integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that, because then the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome.

    What do you do when you need to write integration tests for an app like this? The main goal is having an automated test harness with which to approach a large-scale refactoring. There's lots of spaghetti code, and the app's current architecture is hardly unit-testable; that's why I decided on integration tests first. Any alternative approach is welcome.

    Read the article

  • Sharing the model in MVP Winforms App

    - by Keith G
    I'm working on building up an MVP application (C# WinForms). My initial version is at http://stackoverflow.com/questions/1422343/ ... Now I'm increasing the complexity. I've broken out the code to handle two separate text fields into two view/presenter pairs. It's a trivial example, but it's to work out the details of multiple presenters sharing the same model.

    My questions are about the model:

    1. I am basically using a property-changed event raised by the model for notifying views that something has changed. Is that a good approach? What if it gets to the point where I have 100 or 1000 properties? Is it still practical at that point?
    2. Is instantiating the model in each presenter with NoteModel _model = NoteModel.Instance the correct approach? Note that I do want to make sure all of the presenters are sharing the same data.

    If there is a better approach, I'm open to suggestions... My code looks like this:

    NoteModel.cs

        public class NoteModel : INotifyPropertyChanged
        {
            private static NoteModel _instance = null;
            public static NoteModel Instance { get { return _instance; } }

            static NoteModel() { _instance = new NoteModel(); }
            private NoteModel() { Initialize(); }

            public string Filename { get; set; }
            public bool IsDirty { get; set; }
            public readonly string DefaultName = "Untitled.txt";

            string _sText;
            public string TheText
            {
                get { return _sText; }
                set { _sText = value; PropertyHasChanged("TheText"); }
            }

            string _sMoreText;
            public string MoreText
            {
                get { return _sMoreText; }
                set { _sMoreText = value; PropertyHasChanged("MoreText"); }
            }

            public void Initialize()
            {
                Filename = DefaultName;
                TheText = String.Empty;
                MoreText = String.Empty;
                IsDirty = false;
            }

            private void PropertyHasChanged(string sPropName)
            {
                IsDirty = true;
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(sPropName));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

    TextEditorPresenter.cs

        public class TextEditorPresenter
        {
            ITextEditorView _view;
            NoteModel _model = NoteModel.Instance;

            public TextEditorPresenter(ITextEditorView view) //, NoteModel model)
            {
                //_model = model;
                _view = view;
                _model.PropertyChanged += new PropertyChangedEventHandler(model_PropertyChanged);
            }

            void model_PropertyChanged(object sender, PropertyChangedEventArgs e)
            {
                if (e.PropertyName == "TheText")
                    _view.TheText = _model.TheText;
            }

            public void TextModified() { _model.TheText = _view.TheText; }

            public void ClearView() { _view.TheText = String.Empty; }
        }

    TextEditor2Presenter.cs is essentially the same, except it operates on _model.MoreText instead of _model.TheText.

    ITextEditorView.cs

        public interface ITextEditorView
        {
            string TheText { get; set; }
        }

    ITextEditor2View.cs

        public interface ITextEditor2View
        {
            string MoreText { get; set; }
        }

    Read the article

  • Firebird sequence-backed ID shorthand

    - by pilcrow
    What do others do to simplify the creation of simple, serial surrogate keys populated by a SEQUENCE (a.k.a. GENERATOR) in Firebird >= 2.1? I find the process comparatively arduous. For example, in PostgreSQL, I simply type:

        pg> CREATE TABLE tbl (
          >     id SERIAL NOT NULL PRIMARY KEY,
          >     ...

    In MySQL, I simply type:

        my> CREATE TABLE tbl (
          >     id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
          >     ...

    But in Firebird I type:

        fb> CREATE TABLE tbl (
          >     id BIGINT NOT NULL PRIMARY KEY,
          >     ...
        fb> CREATE SEQUENCE tbl_id_seq;
        fb> SET TERM !! ;
          > CREATE TRIGGER tbl_id_trg FOR tbl
          >     ACTIVE BEFORE INSERT POSITION 0
          > AS
          > BEGIN
          >     IF ((new.id IS NULL) OR (new.id <= 0)) THEN
          >     BEGIN
          >         new.id = GEN_ID(tbl_id_seq, 1);
          >     END
          > END !!
          > SET TERM ; !!

    ... and I get pretty bored by the time I reach the trigger definition. However, I routinely make SEQUENCE-backed ID fields for temporary, development and throw-away tables. What do others do to simplify this? Work with an IDE? Run a pre-processing, in-house Perl script over the DDL file? Etc.

    Read the article
