Search Results

Search found 23890 results on 956 pages for 'issue'.

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005) so we may have integrity, a single replication plan, etc., or storing them on the file system? One issue we have is that we have multiple load-balanced servers that need to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or whether we should dynamically pull and create the resolution we would like upon request. Our concerns are with the following: data integrity, data replication, multiple resolutions, speed of the database vs the file system, overhead load of the database vs the file system, and data management and backup. Does anyone have a similar situation or have any input on what would be recommended? Thanks in advance for the help!
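
    Not part of the original question, but since it mentions dynamically creating the requested resolution upon request, here is a rough C# sketch of that approach (the cache folder, naming scheme and use of System.Drawing are assumptions, not anything from the post; cacheKey would be something like the image's database id). Each load-balanced server regenerates and caches the derived sizes locally from the stored original, which sidesteps replicating the derived images:

    ```csharp
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;
    using System.IO;

    public static class ImageResizer
    {
        // Resize the original (e.g. a stream read from the database row) to the requested
        // width on first request and cache the result on disk, so later requests are
        // served straight from the cached file.
        public static string GetOrCreateResized(Stream originalImage, string cacheKey, int width, string cacheDir)
        {
            string cachedPath = Path.Combine(cacheDir, cacheKey + "_" + width + ".jpg");
            if (File.Exists(cachedPath))
                return cachedPath;

            using (Image original = Image.FromStream(originalImage))
            {
                int height = (int)(original.Height * (width / (double)original.Width));
                using (var resized = new Bitmap(width, height))
                using (var g = Graphics.FromImage(resized))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.DrawImage(original, 0, 0, width, height);
                    resized.Save(cachedPath, ImageFormat.Jpeg);
                }
            }
            return cachedPath;
        }
    }
    ```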

  • Open XML document ContentControls problem with signed id's

    - by willvv
    I have an application that generates Open XML documents with Content Controls. To create a new Content Control I use Interop and the method ContentControls.Add. This method returns an instance of the added Content Control. I have some logic that saves the id of the Content Control to reference it later, but on some computers I've been having a weird problem. When I access the ID property of the Content Control I just created, it returns a string with the numeric id. The problem is that when this value is too big, after I save the document, if I look through the document.xml in the generated document, the <w:id/> element of the <w:sdtPr/> element has a negative value that is the signed equivalent of the value I got from the Id property of the generated control. For example: var contentControl = ContentControls.Add(...); var contentControlId = contentControl.ID; // the value of contentControlId is "3440157266" If I save the document and open it in the Package Explorer, the Id of the Content Control is "-854810030" instead of "3440157266". What I have figured out is this: ((int)uint.Parse("3440157266")).ToString() returns "-854810030" Any idea why this happens? This issue is hard to replicate because I don't control the Id of the generated controls; the Id is automatically generated by the Interop libraries.
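
    Not from the original post, but the arithmetic behind the observation is just 32-bit wrapping: the id is a 32-bit value, so anything above int.MaxValue comes out negative when it is serialized as a signed integer (3440157266 - 4294967296 = -854810030). A small sketch of normalizing both forms to one key before comparing (the helper is made up for illustration):

    ```csharp
    public static class ContentControlIds
    {
        // "3440157266" does not fit in a signed Int32, so the same 32 bits read back
        // as a signed value become "-854810030". Parsing as long and truncating to
        // uint maps both strings to the same unsigned key.
        public static uint Normalize(string rawId)
        {
            long value = long.Parse(rawId);    // accepts "3440157266" and "-854810030"
            return unchecked((uint)value);     // both become 3440157266
        }
    }
    ```

    Comparing ContentControlIds.Normalize(contentControl.ID) against the normalized value read back from the w:id element should then match regardless of sign.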

  • PInvokeStackImbalance -- C# with offreg.dll ( windows ddk7 )

    - by user301185
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find out more information on the offreg.dll here: MSDN Currently, while attempting to create the hive using ORCreateHive, I receive the following error: "Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature." Here is the offreg.h file containing ORCreateHive: typedef PVOID ORHKEY; typedef ORHKEY *PORHKEY; VOID ORAPI ORGetVersion( __out PDWORD pdwMajorVersion, __out PDWORD pdwMinorVersion ); DWORD ORAPI OROpenHive ( __in PCWSTR lpHivePath, __out PORHKEY phkResult ); DWORD ORAPI ORCreateHive ( __out PORHKEY phkResult ); DWORD ORAPI ORCloseHive ( __in ORHKEY Handle ); The following is my C# code attempting to call the .dll and create the pointer for future use. using System.Runtime.InteropServices; namespace WindowsFormsApplication6 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORCreateHive", SetLastError=true, CallingConvention = CallingConvention.StdCall)] public static extern IntPtr ORCreateHive2(); private void button1_Click(object sender, EventArgs e) { try { IntPtr myHandle = ORCreateHive2(); } catch (Exception r) { MessageBox.Show(r.ToString()); } } } } I have been able to create pointers in the past with no issue utilizing user32.dll, icmp.dll, etc. However, I am having no such luck with offreg.dll. Thank you.
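
    A guess at the cause, not stated in the original post: the native ORCreateHive takes an __out PORHKEY parameter and returns a DWORD status code, while the managed declaration above takes no parameters and returns IntPtr, so the call really does unbalance the stack. A sketch of a matching declaration (assuming ORAPI expands to the usual stdcall convention for Win32 exports) might look like this:

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    internal static class OffReg
    {
        // DWORD ORCreateHive(__out PORHKEY phkResult);
        // The hive handle comes back through the out parameter; the return value is a
        // Win32 error code (0 == ERROR_SUCCESS).
        [DllImport("offreg.dll", CallingConvention = CallingConvention.StdCall)]
        internal static extern uint ORCreateHive(out IntPtr phkResult);

        // DWORD ORCloseHive(__in ORHKEY Handle);
        [DllImport("offreg.dll", CallingConvention = CallingConvention.StdCall)]
        internal static extern uint ORCloseHive(IntPtr handle);
    }

    // Usage sketch:
    //   IntPtr hive;
    //   uint result = OffReg.ORCreateHive(out hive);
    //   if (result != 0) { /* handle the Win32 error code */ }
    ```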

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace and that would act similarly to Codeplex or Sourceforge. Does anyone know of something like this, and if so do you have experience with it? Requirements: open source or free; locally deployable; the same types of features found in Sourceforge/Codeplex, i.e. issue/feature tracking and community interaction (voting, roles, etc.); SCM integration (optional); .NET/Windows friendly (optional). Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place. UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are a MS shop from top to bottom. How practical do you think it is going to be to try and use these in a MS .NET environment? Would that be like trying to shove a square peg through a round hole?

  • How do I simulate the usage of a sequence of web pages?

    - by Rory Becker
    I have a simple sequence of web pages written in ASP.Net 3.5 SP1. Page1 - A Logon Form.... txtUsername, txtPassword and cmdLogon Page2 - A Menu (created using DevExpress ASP.Net controls) Page3 - The page redirected to by the server in the event that the user picks the right menu option in Page2 I would like to create a threaded program to simulate many users trying to use this sequence of pages. I have managed to create a host Winforms app which launches a new thread for each "User". I have further managed to work out the basics of WebRequest enough to perform a request which retrieves the Logon page itself. Dim Request As HttpWebRequest = TryCast(WebRequest.Create("http://MyURL/Logon.aspx"), HttpWebRequest) Dim Response As HttpWebResponse = TryCast(Request.GetResponse(), HttpWebResponse) Dim ResponseStream As StreamReader = New StreamReader(Response.GetResponseStream(), Encoding.GetEncoding(1252)) Dim HTMLResponse As String = ResponseStream.ReadToEnd() Response.Close() ResponseStream.Close() Next I need to simulate the user having entered information into the 2 TextBoxes and pressing logon.... I have a hunch this requires me to add the right sort of "PostData" to the request before submitting. However, I'm also concerned that "ViewState" may be an issue. Am I correct regarding the PostData? How do I add the postData to the request? Do I need to be concerned about Viewstate? Update: While I appreciate that Selenium or similar products are useful for acceptance testing, I find that they are rather clumsy for what amounts to load testing. I would prefer not to load 100 instances of Firefox or IE in order to simulate 100 users hitting my site. This was the reason I was hoping to take the ASPNet HttpWebRequest route.
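
    The hunch about post data is correct for a raw WebRequest approach, and the hidden __VIEWSTATE / __EVENTVALIDATION fields from the GET response do need to be echoed back in the POST. A rough sketch of the idea (in C# rather than VB; the field names txtUsername, txtPassword and cmdLogon are taken from the post but the rendered names may differ, and HttpUtility needs a reference to System.Web):

    ```csharp
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Text.RegularExpressions;
    using System.Web;

    public static class LogonSimulator
    {
        // GET the logon page, pull out the hidden ASP.NET fields, then POST them back
        // together with the simulated user's credentials.
        public static string SimulateLogon(string url, string user, string password)
        {
            string page;
            using (var getResponse = WebRequest.Create(url).GetResponse())
            using (var reader = new StreamReader(getResponse.GetResponseStream()))
                page = reader.ReadToEnd();

            string viewState = Regex.Match(page, "id=\"__VIEWSTATE\" value=\"([^\"]*)\"").Groups[1].Value;
            string eventValidation = Regex.Match(page, "id=\"__EVENTVALIDATION\" value=\"([^\"]*)\"").Groups[1].Value;

            string postData =
                "__VIEWSTATE=" + HttpUtility.UrlEncode(viewState) +
                "&__EVENTVALIDATION=" + HttpUtility.UrlEncode(eventValidation) +
                "&txtUsername=" + HttpUtility.UrlEncode(user) +
                "&txtPassword=" + HttpUtility.UrlEncode(password) +
                "&cmdLogon=Logon";

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            byte[] body = Encoding.UTF8.GetBytes(postData);
            using (Stream requestStream = request.GetRequestStream())
                requestStream.Write(body, 0, body.Length);

            using (var postResponse = (HttpWebResponse)request.GetResponse())
            using (var postReader = new StreamReader(postResponse.GetResponseStream()))
                return postReader.ReadToEnd();   // caller can check this for the menu page / redirect
        }
    }
    ```

    In practice the GET and POST should also share a CookieContainer so the ASP.NET session and authentication cookies carry over to the following pages.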

  • WCF Fails when using impersonation over 2 machine boundaries (3 machines)

    - by MrTortoise
    These scenarios work in their individual pieces. It's when I put it all together that it breaks. I have a WCF service using netTCP that uses impersonation to get the caller's ID (role-based security will be used at this level). On top of this is a WCF service using basicHttp with TransportCredentialOnly, which also uses impersonation. I then have a client front end that connects to the basicHttp service. The aim of the game is to return the client's username from the netTCP service at the bottom - so ultimately I can use role-based security there. Each service is on a different machine, and each service works when you remove any calls it makes to other services and run a client for it both locally and remotely. I.e. the problem only manifests when you jump across more than one machine boundary: the setup breaks when I connect each part together, but the parts work fine on their own. I also specify [OperationBehavior(Impersonation = ImpersonationOption.Required)] in the method and have IIS set up to only allow Windows authentication (actually I still have anonymous enabled, but disabling it makes no difference). This impersonation works fine in the scenario where I have a netTCP service on machine A, a client with a basicHttp service on machine B, and a client for the basicHttp service also on machine B ... however if I move that client to any machine C I get the following error: the exception is 'The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:10:00'', and the inner message is 'An existing connection was forcibly closed by the remote host'. I am beginning to think this is more of a network issue than config ... but then I'm grasping at straws ... 
the config files are as follows (heading from the client down to the netTCP layer) <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="basicHttpBindingEndpoint" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:10:00" sendTimeout="00:02:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://panrelease01/WCFTopWindowsTest/Service1.svc" binding="basicHttpBinding" bindingConfiguration="basicHttpBindingEndpoint" contract="ServiceReference1.IService1" name="basicHttpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour" /> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> the service for the client (basicHttp service and the client for the netTCP service) <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBindingEndpoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> <basicHttpBinding> <binding name="basicHttpWindows"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows"></transport> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="net.tcp://5d2x23j.panint.com/netTCPwindows/Service1.svc" binding="netTcpBinding" bindingConfiguration="netTcpBindingEndpoint" contract="ServiceReference1.IService1" name="netTcpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation" allowNtlm="true"/> </clientCredentials> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> 
<serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="WCFTopWindowsTest.Service1" behaviorConfiguration="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="basicHttpWindows" name ="basicHttpBindingEndpoint" contract ="WCFTopWindowsTest.IService1"> </endpoint> </service> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration> then finally the service for the netTCP layer <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <authentication mode="Windows"></authentication> <authorization> <allow roles="*"/> </authorization> <compilation debug="true" targetFramework="4.0" /> <identity impersonate="true" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTCPwindows"> <security mode="Transport"> <transport clientCredentialType="Windows"></transport> </security> </binding> </netTcpBinding> </bindings> <services> <service behaviorConfiguration="netTCPwindows.netTCPwindowsBehaviour" name="netTCPwindows.Service1"> <endpoint address="" bindingConfiguration="netTCPwindows" binding="netTcpBinding" name="netTcpBindingEndpoint" contract="netTCPwindows.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mextcp" binding="mexTcpBinding" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8721/test2" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="netTCPwindows.netTCPwindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="false" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration>

  • Using LinqExtender to make OData feed fails

    - by BurningIce
    A pretty simple question: has anyone here tried to make an OData feed based on an IQueryable created with LinqExtender? I have created a simple Linq provider that supports Where, Select, OrderBy and Take and wanted to expose it as an OData feed. I keep getting an error though, and the exception is a NullReferenceException with the following stack trace: at System.Data.Services.Serializers.Serializer.GetObjectKey(Object resource, IDataServiceProvider provider, String containerName) at System.Data.Services.Serializers.Serializer.GetUri(Object resource, IDataServiceProvider provider, ResourceContainer container, Uri absoluteServiceUri) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, Type expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__0.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) I've kinda narrowed it down to an issue where LinqExtender wraps every returned object, so that my object actually inherits itself - that's at least how it looks in the debugger. These two queries are basically the same. The first is the legacy API where the OrderBy and Select are regular Linq to Objects. The second query is a "real" linq provider made with LinqExtender. var db = CalendarDataProvider.GetCalendarEntriesByDate(DateTime.Now, DateTime.Now.AddMonths(1), Guid.Empty) .OrderBy(o => o.Title) .Select(o => new ODataCalendarEntry(o)); var query = new ODataCalendarEntryQuery() .Where(o => o.Start > DateTime.Now && o.End < DateTime.Now.AddMonths(1)) .OrderBy(o => o.Title); When returning db for the OData feed everything is fine, but returning query throws a NullReferenceException. I've tried all kinds of tricks and even tried to project all the data into a new object like this, but I still get the same error: return query.Select(o => new ODataCalendarEntry { Title = o.Title, Start = o.Start, End = o.End, Name = o.Name });

  • 404 Not Found When Requesting URI With Encoded Parameters

    - by Richard Knop
    I am pretty sure this is some problem with the Apache configuration, because it used to work on the previous hosting provider with the same PHP/MySQL configuration. In my application, users are able to delete photos by going to URIs like this: http://example.com/my-account/remove-media/id/9/ret/my-account%252Fedit-album%252Fid%252F1 The parameter id is the id of the photo to be removed, and the parameter ret is a relative URL to which the user should be redirected after the removal of the photo. But after clicking on a link like that I get a 404 Not Found error with the text: Not Found The requested URL /public/my-account/remove-media/id/9/ret/my-account/edit-album/id/1 was not found on this server. It used to work on my previous hosting provider, so I guess it is just some easy Apache configuration issue? One more thing: there is an .htaccess file that changes the document root to /public: RewriteEngine On php_value upload_max_filesize 15M php_value post_max_size 15M php_value max_execution_time 200 php_value max_input_time 200 # Exclude some directories from URI rewriting #RewriteRule ^(dir1|dir2|dir3) - [L] RewriteRule ^\.htaccess$ - [F] RewriteCond %{REQUEST_URI} ="" RewriteRule ^.*$ /public/index.php [NC,L] RewriteCond %{REQUEST_URI} !^/public/.*$ RewriteRule ^(.*)$ /public/$1 RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^.*$ - [NC,L] RewriteRule ^public/.*$ /public/index.php [NC,L] In the public folder there is a second .htaccess file for MVC: RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] # Turn off magic quotes #php_flag magic_quotes_gpc off

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this: public static void StartWebServer() { var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web"; var webServer = new Process { StartInfo = new ProcessStartInfo { Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite), FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE" } }; webServer.Start(); } Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer: _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer"); _document = _browserWindow.SilverlightDocument; I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a pre-requisite for White to work properly). Is that all I need to do or are there additional steps required?
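
    One sketch for the hard-coded path problem (an assumption-laden example, not from the original post): since the UAT assembly itself is built and run from inside the agent's checkout directory, the fixture can derive the site folder from its own location instead of a fixed path. TeamCity also exposes the checkout directory as a build parameter that could be passed in, but the version below avoids depending on it; the number of ".." hops and the project folder name depend on the solution layout:

    ```csharp
    using System;
    using System.IO;

    public static class TestPaths
    {
        // Walk up from wherever the test assembly runs (e.g. ...\UATs\bin\Debug) to the
        // solution root inside the checkout, then down to the web project folder.
        public static string GetPathToSite()
        {
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;
            string solutionRoot = Path.GetFullPath(Path.Combine(baseDir, @"..\..\.."));
            return Path.Combine(solutionRoot, @"FrontEnd\MyProject.FrontEnd.Web");
        }
    }
    ```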

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF, ASP.NET/C# for server and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely. Problem I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element)? Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will. Options I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include: Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this) Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment). Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet. I'm interested in comments on these approaches as well as alternative approaches that might solve the problem. Other Notes The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
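
    On option 1, a minimal sketch (not from the original post) of what the wire format could look like: WCF's DataContractSerializer handles Dictionary<string, string> out of the box, so the EAV rows can travel as a plain property bag with the per-model field metadata alongside it, leaving the binding problem entirely on the client. The contract names here are invented for illustration:

    ```csharp
    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // One record = an entity type plus an attribute/value bag, so the same contract
    // can carry any of the per-user custom models.
    [DataContract]
    public class EntityRecord
    {
        [DataMember] public string EntityType { get; set; }
        [DataMember] public int Id { get; set; }
        [DataMember] public Dictionary<string, string> Attributes { get; set; }
    }

    // Metadata describing each custom field, so the client can build columns and
    // validation rules dynamically.
    [DataContract]
    public class FieldDefinition
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string ClrType { get; set; }        // e.g. "System.Decimal"
        [DataMember] public string ValidationRule { get; set; } // e.g. a regex or range expression
    }
    ```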

  • Is MVC 2 client-side validation broken in Visual Studio 2010 RC?

    - by Will
    I can't seem to get client side validation working with the version of MVC released with Visual Studio 2010 RC. I've tried it with two projects -- one upgrade from 1.0, and one using the template that came with VS. I'd think the template version would work, but it doesn't. Added the following scripts: <script type="text/javascript" src="<%= Url.Content("~/Scripts/MicrosoftMvcValidation.js") %>"> </script> <script type="text/javascript" src="<%= Url.Content("~/Scripts/jquery.validate.js")%>"> </script> which are downloaded to the client correctly. Added the following to my form page: <% Html.EnableClientValidation(); %> <%--yes, am aware of the EndForm() bug! --%> <% using (Html.BeginForm()) { %> <%--snip --%> and I can see the client validation scripts have been added to the bottom of the form. But still client validation never happens. What is worse is that in my upgraded project, the client validation scripts are never output in the page! PLEASE NOTE: I am specifically asking about the version of MVC2 that came with VS2010 RC. Also, I do know how to google; please don't waste anybody's time searching and answering if you aren't familiar with this issue in the release candidate of Visual Studio. Thanks.

  • How to join a table in symfony (Propel) and retrieve objects from both tables with one query

    - by Jean-Philippe
    Hi, I'm trying to get an easy way to fetch data from two joined Mysql table using Propel (inside Symfony) but in one query. Let's say I do this simple thing: $comment = CommentPeer::RetrieveByPk(1); print $comment->getArticle()->getTitle(); //Assuming the Article table is joined to the Comment table Symfony will call 2 queries to get that done. The first one to get the Comment row and the next one to get the Article row linked to the comment one. Now, I am trying to find a way to make all that within one query. I've tried to join them using $c = new Criteria(); $c->addJoin(CommentPeer::ARTICLE_ID, ArticlePeer::ID); $c->add(CommentPeer::ID, 1); $comment = CommentPeer::doSelectOne($c); But when I try to get the Article object using $comment->getArticle() It will still issue the query to get the Article row. I could easily clear all the selected columns and select the columns I need but that would not give me the Propel object I'd like, just an array of the query's raw result. So how can I get a populated propel object of two (or more) joined table with only one query? Thanks, JP

  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable length array that I am trying to pass into a function. The function will shift the first value off and return it, and move the remaining values over to fill in the missing spot, putting, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so: int framelist[128]; shift(framelist); However, I would like to be able to use a VLA declared in this manner: int *framelist; framelist = malloc(size * sizeof(int)); shift(framelist); I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, but the one in the second case immediately gives a segmentation fault. Here is the code for the queue function, which doesn't do anything except try to grab the value from the first part of the array... int shift(int array[]) { int value = array[0]; return value; } Any ideas why it won't accept the VLA? I'm still new to C, so if I am doing something fundamentally wrong, let me know.

  • How to properly use references with variadic templates

    - by Hippicoder
    I have something like the following code: template<typename T1, typename T2, typename T3> void inc(T1& t1, T2& t2, T3& t3) { ++t1; ++t2; ++t3; } template<typename T1, typename T2> void inc(T1& t1, T2& t2) { ++t1; ++t2; } template<typename T1> void inc(T1& t1) { ++t1; } I'd like to reimplement it using the proposed variadic templates from the upcoming standard. However, all the examples I've seen so far online seem to be printf-like examples; the difference here seems to be the use of references. I've come up with the following: template<typename T> void inc(T&& t) { ++t; } template<typename T,typename ... Args> void inc(T&& t, Args&& ... args) { ++t; inc(args...); } What I'd like to know is: should I be using r-values instead of references? Any hints or clues as to how to accomplish what I want correctly would be welcome. Also, what guarantees does the new proposed standard provide w.r.t. the recursive function calls - is there some indication that the above variadic version will be as optimal as the original (should I add inline or some such)?

  • Completely bizarre Firefox CSS bug

    - by Jason
    I've been doing front end development for a long time, and I have NEVER come across a bug like this before... Save the following HTML to a file and view it in Firefox (mine is 3.6.3): <html xmlns="http://www.w3.org/1999/xhtml"> <head> <style type="text/css"> body { font-family: Helvetica, Sans-Serif;} h2 {font-weight: normal;} </style> </head> <body> <h2>Some normal text <strong>some bold text</strong> weird huh?</h2> </body> </html> If you don't want to give it a shot the output is like your cat walked across your keyboard while character map was turned on, except in the strong tags. I feel like this may be a font issue? When I get rid of font-weight: normal it goes back to normal, but I don't want everything to be bolded in my h2... Anyone have any ideas? More importantly, is anyone able to reproduce this?? Thanks.

  • How do I reset a form in an ajax callback?

    - by B.Gordon
    I am sending a form using simple ajax and returning the results in a div above the form. The problem is that after the form is submitted and validated, I display a thank you and want to reset the form so they don't just press the submit button again... Can't seem to find the right code to do this... <form id="myForm" target="sendemail.php" method="post"> <div id="results"></div> <input type="text" name="value1"> <input type="text" name="value2"> <input type="submit" name="submit"> </form> So, my sendemail.php validation errors and success messages appear in #results without problems. But... when I try to send back a javascript form reset command, it does not work. Naturally I cannot see it in the source code since it is an AJAX callback so I don't know if that is the issue or if I am just using the wrong syntax. echo "<p>Thank you. Your message has been accepted for delivery.</p>"; echo "<script type=\"text/javascript\">setTimeout('document.getElementById('myForm').reset();',1000);</script>"; Any ideas gurus?

  • Zend_ACL isAllowed causes issues with dojo

    - by churris43
    Hi all, I have an issue setting up Zend_Acl. I got it pretty well set up and running, but I realised that in some forms where I'm using zend_dojo, dojo doesn't actually get loaded. Without going into too much detail: I have set up my access list, and as soon as I call isAllowed with the name of the resource taken from the request object, dojo is not loaded (I think). This is the code that breaks dojo: class MyPluginAcl extends Zend_Controller_Plugin_Abstract { public function __construct(Zend_Acl $acl) { $this->_acl = $acl; } public function preDispatch(Zend_Controller_Request_Abstract $request) { ..... $role = "guest" $resource = $request->getControllerName(); var_dump($resource) //Returns string(10)'myresource' $action = $request->getActionName(); if (!$this->_acl->isAllowed($role, $resource,$action)){ //Code to redirect somewhere } ...... } The things that don't make sense are the following: if I do a var_dump($resource) I get string(10)'myresource', but it still doesn't work; if I set $resource to be $resource = new Zend_Acl_Resource($request->getControllerName()); it still doesn't work; but if I set $resource to a literal string value (eg. $resource = "myresource";), the whole thing works. Any ideas ... Thanks

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL(local dev) to the Heroku server along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to whichever Heroku's DB? Do I need to modify anything in the file database.yml to make it happen? I ran the following command: heroku db:push and I am getting the error: Sending data 2 tables, 3 records !!! Caught Server Exception | ETA: --:--:-- Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations" I have 2 tables, one I create for my app and the other schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I have created and it's showing 0. Any ideas what I might be missing or what I am doing wrong? EDIT: I figured out the above, Heroku's DB already have schema_migrations the moment I ran migrate. New question: Does anyone know how I can exclude data from a specific table from being pushed to Heroku DB. The table to exclude in this case will be schema_migrations. Not so good solution: I googled around and someone else was having the same issue. He suggested naming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution but will do for the time being. A better solution will be to use an existing Rails command which can reset a specific table from a database. I don't think Rake can do that.

  • PHP not obeying my defined ETags

    - by Sam Bisbee
    What I'm doing I'm pulling an image from the database and sending it to the browser with all the proper headers - the image displays fine. I also send an ETag header, using the SHA1 of the image's content as the tag. The images are getting called semi regularly, so caching is a bit of an issue (won't kill the site, but nice to have). The Problem $_SERVER['HTTP_IF_NONE_MATCH'] is not available to me. As far as I can tell, this is because of PHP's "disobey the cache controls" life style. I can't mess with the session cache limiter, because I don't have access. But, even if I did have access, I wouldn't want to touch it: 99% of the site is under WordPress. The Environment PHP 4 (don't ask) Apache 2.2 WordPress The images live in the database (largeblog), which I can't change. Any guidance, tip/tricks, etc. would be helpful. I don't have much room to change the environmental/structural stuff. Cheers.

  • Hibernate and parent/child relations

    - by Marco
    Hi to all, I'm using Hibernate in a Java application, and I feel that something could be done better for the management of parent/child relationships. I have a complex set of entities that have various kinds of relationships between them (one-to-many, many-to-many, one-to-one, both unidirectional and bidirectional). Every time an entity is saved and it has a parent, to establish the relationship the parent has to add the child to its collection (considering a one-to-many relationship). For example: Parent p = (Parent) session.load(Parent.class, pid); Child c = new Child(); c.setParent(p); p.getChildren().add(c); session.save(c); session.flush(); In the same way, if I remove a child then I have to explicitly remove it from the parent collection too. Child c = (Child) session.load(Child.class, cid); session.delete(c); Parent p = (Parent) session.load(Parent.class, pid); p.getChildren().remove(c); session.flush(); I was wondering if there are some best practices out there to do these jobs in a different way: when I save a child entity, automatically add it to the parent collection; if I remove a child, automatically update the parent collection by removing the child, etc. For example, Child c = new Child(); c.setParent(p); session.save(c); // Automatically update the parent collection session.flush(); or Child c = (Child) session.load(Child.class, cid); session.delete(c); // Automatically updates its parents (could be more than one) session.flush(); Anyway, it would not be difficult to implement this behaviour, but I was wondering whether there exist standard tools or well-known libraries that deal with this issue. And, if not, what are the reasons? Thanks

  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the Domain layer. However, I am now wondering which is the better approach to handling relationships between Domain objects: (1) call the necessary find() method from the related data mapper directly within the domain object, or (2) write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper function within the domain object. Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether that means their own mapper or having access to the other mappers in the system). Additionally, it seems that option 2 unnecessarily complicates the data access layer as it creates table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects? Update: These are the only two solutions that I can envision to handle the issue of relations between domain objects. Any example showing a better method would be welcome.

  • How do I load an XML document, add and remove nodes, then apply it to a ASP DataGrid control?

    - by JFOX
    I have a pretty simple operation but am struggling with how to implement it. I am loading XML from an external data source using DataSet.ReadXml(), then creating a new XmlDataDocument from that DataSet, then syncing the DataSet back to the XmlDataDocument like so: doc = new XmlDataDocument(dsDataSet); dsDataSet.EnforceConstraints = false; dsDataSet= doc.DataSet; Once loaded I do two things to the XmlDataDocument: (1) loop through and check whether a purely meta node, count, exists right beneath the root node, and if so remove it; (2) check whether a thumb node exists in a second-level node list, and if not, create and append it. This is all going as expected because the result of doc.save() looks correct. Where I'm having an issue is updating the DataSet, which is being applied as the data source for an ASP DataGrid. Once all of the above XML manipulation is done I do this: dsDataSet.Merge(doc.DataSet); dsDataSet.AcceptChanges(); I then apply the data set to the grid control: dgList.DataSource = dsDataSet; dgList.DataBind(); But, when I do this I get this error on the site: System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'thumb'. What did I miss?
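
    A hedged guess at the cause, not part of the original post: XmlDataDocument only surfaces elements that already exist in the DataSet's schema as rows and columns, so a thumb element appended purely on the XML side never becomes a thumb column, and the grid's binding to 'thumb' fails. One sketch of a fix is to add the column to the table before the XmlDataDocument is created (the table name "item" and the file name are assumptions):

    ```csharp
    using System.Data;
    using System.Xml;

    DataSet dsDataSet = new DataSet();
    dsDataSet.ReadXml("external-feed.xml");        // placeholder for the real source

    // Make sure the relational schema knows about "thumb" before associating the
    // DataSet with an XmlDataDocument; schema changes are not allowed afterwards.
    DataTable items = dsDataSet.Tables["item"];
    if (!items.Columns.Contains("thumb"))
        items.Columns.Add("thumb", typeof(string));

    XmlDataDocument doc = new XmlDataDocument(dsDataSet);
    // ... remove the count node and append thumb nodes as before; the appended
    // elements now map to the "thumb" column, so the DataGrid binding can find it.
    ```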

  • Does Android XML Layout's 'include' Tag Really Work?

    - by Eric Burke
    I am unable to override attributes when using <include> in my Android layout files. When I searched for bugs, I found Declined Issue 2863: "include tag is broken (overriding layout params never works)" Since Romain indicates this works in the test suites and his examples, I must be doing something wrong. My project is organized like this: res/layout buttons.xml res/layout-land receipt.xml res/layout-port receipt.xml The buttons.xml contains something like this: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal"> <Button .../> <Button .../> </LinearLayout> And the portrait and landscape receipt.xml files look something like: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> ... <!-- Overridden attributes never work. Nor do attributes like the red background, which is specified here. --> <include android:id="@+id/buttons_override" android:background="#ff0000" android:layout_width="fill_parent" layout="@layout/buttons"/> </LinearLayout> What am I missing?

  • C# Byte[] to Url Friendly String

    - by LorenVS
    Hello, I'm working on a quick captcha generator for a simple site I'm putting together, and I'm hoping to pass an encrypted key in the url of the page. I could probably do this as a query string parameter easily enough, but I'm hoping not to (just because nothing else runs off the query string)... My encryption code produces a byte[], which is then transformed using Convert.ToBase64String(byte[]) into a string. This string, however, is still not quite url friendly, as it can contain things like '/' and '='. Does anyone know of a better function in the .NET framework to convert a byte array to a url friendly string? I know all about System.Web.HttpUtility.UrlEncode() and its equivalents; however, they only work properly with query string parameters. If I url encode an '=' inside of the path, my web server brings back a 400 Bad Request error. Anyways, not a critical issue, but I'm hoping someone can give me a nice solution. EDIT: Just to be absolutely sure exactly what I'm doing with the string, I figured I would supply a little more information. The byte[] that results from my encryption algorithm should be fed through some sort of algorithm to make it into a url friendly string. After this, it becomes the content of an XElement, which is then used as the source document for an XSLT transformation, and is used as a part of the href attribute for an anchor. I don't believe the xslt transformation is causing the issues, since what is coming through on the path appears to be an encoded query string parameter, but it causes the HTTP 400. I've also tried HttpUtility.UrlPathEncode() on a base64 string, but that doesn't seem to do the trick either (I still end up with '/'s in my url).
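
    Not from the original post, but the usual trick here is the "base64url" variant of base64: swap the two path-unsafe characters and drop the padding, which keeps the value reversible without any percent-encoding. A minimal sketch:

    ```csharp
    using System;

    public static class UrlSafeBase64
    {
        // Standard base64 uses '+', '/' and '=' padding; replace/strip them so the
        // value can live in a URL path segment.
        public static string Encode(byte[] data)
        {
            return Convert.ToBase64String(data)
                .Replace('+', '-')
                .Replace('/', '_')
                .TrimEnd('=');
        }

        public static byte[] Decode(string text)
        {
            string s = text.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)            // restore the padding stripped by Encode
            {
                case 2: s += "=="; break;
                case 3: s += "="; break;
            }
            return Convert.FromBase64String(s);
        }
    }
    ```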

  • Creating Multiple TextFields in runtime AS2

    - by ortho
    Hi lads, I have an issue generating multiple text fields in AS2. My AS2 Flash application calls the database (via PHP) and then receives an XML file that contains a few objects. All I want to do is to loop through these XML objects and then create a TextField (actually a component that contains graphics and a TextField, but this will come later) based on the information from each XML object. I know that I can create something like: _root.createTextField("myText1",1,0,0,100,20); myText1.text = "this is text ONE"; _root.createTextField("myText2",2,0,30,100,20); myText2.text = "this is text TWO"; which will result in 2 text fields, but the problem is when I try to create them dynamically (e.g. I have an item myNode[0].attributes.name, but when I use it in _root.createTextField(myNode[0].attributes.name, 1, 0, 0, 100, 20) I get a compile error). var myXML:XML = new XML(); myXML.ignoreWhite=true; myXML.load("tekst.xml"); var tekst:String = new String(); myXML.onLoad = function(success){ if (success){ var myNode = myXML.firstChild.childNodes; for (i=0; i < myNode.length; i++) { trace("height: "+myNode[i].attributes.height); trace("color: "+myNode[i].attributes.color); trace(myNode[i].firstChild.nodeValue); } } } This actually traces the values and I can actually use them when creating the component, but it doesn't create the component with the corresponding name (obviously both instances point to the same object, so only the last one is visible). Please help, I tried many things, but no joy. Thank you in advance.
