Search Results

Search found 18220 results on 729 pages for 'null hypothesis'.

Page 308/729 | < Previous Page | 304 305 306 307 308 309 310 311 312 313 314 315  | Next Page >

  • Java resource closing

    - by Bob
    Hi, I'm writing an app that connects to a website and reads one line from it. I do it like this:

        try {
            URLConnection connection = new URL("www.example.com").openConnection();
            BufferedReader rd = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String response = rd.readLine();
            rd.close();
        } catch (Exception e) {
            // exception handling
        }

    Is this good? I mean, I close the BufferedReader on the last line, but I do not close the InputStreamReader. Should I create a standalone InputStreamReader from connection.getInputStream(), build the BufferedReader from that standalone InputStreamReader, and then close both readers? I think it would be better to place the closing calls in a finally block, like this:

        InputStreamReader isr = null;
        BufferedReader br = null;
        try {
            URLConnection connection = new URL("www.example.com").openConnection();
            isr = new InputStreamReader(connection.getInputStream());
            br = new BufferedReader(isr);
            String response = br.readLine();
        } catch (Exception e) {
            // exception handling
        } finally {
            br.close();
            isr.close();
        }

    But this is ugly, because the close() methods can themselves throw an exception, so I have to handle or rethrow it. Which solution is better? Or what would be the best solution?

    Read the article

  • Can someone help me with this StructureMap error I'm getting?

    - by Pure.Krome
    Hi folks, I'm trying to wire up a simple ASP.NET MVC2 controller class to my own LoggingService. The code compiles fine, but I get the following runtime error:

        {"StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily System.Type, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"}

    What the? mscorlib? Here's some sample code of my wiring up:

        protected void Application_Start()
        {
            MvcHandler.DisableMvcResponseHeader = true;
            BootstrapStructureMap();
            ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());
            RegisterRoutes(RouteTable.Routes);
        }

        private static void BootstrapStructureMap()
        {
            ObjectFactory.Initialize(x => x.For<ILoggingService>().Use<Log4NetLoggingService>());
        }

    and the controller, simplified for this question:

        public class SearchController : Controller
        {
            private readonly ILoggingService _log;

            public SearchController(ILoggingService loggingService) : base(loggingService)
            {
                // Error checking removed for brevity.
                _log = loggingService;
                _log.Tag = "SearchController";
            }
            ...
        }

    and the StructureMap factory (main method), also greatly simplified for this question:

        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            IController result = null;
            if (controllerType != null)
            {
                try
                {
                    result = ObjectFactory.GetInstance(controllerType) as Controller;
                }
                catch (Exception exception)
                {
                    if (exception is StructureMapException)
                    {
                        Debug.WriteLine(ObjectFactory.WhatDoIHave());
                    }
                }
            }
            return result;
        }

    Hmm. I just don't get it. StructureMap version 2.6.1.0, ASP.NET MVC 2. Any ideas?
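
    For reference (not from the original question): exception code 202 simply means StructureMap was asked to build a type it has no registration for, here System.Type itself, which suggests something in the controller's constructor chain takes a System.Type parameter that StructureMap cannot supply. A minimal sketch of a factory that surfaces the full diagnostic instead of swallowing it; the class name and fallback behaviour are assumptions, not the poster's code:

        public class StructureMapControllerFactory : DefaultControllerFactory
        {
            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                if (controllerType == null)
                    return base.GetControllerInstance(requestContext, controllerType);

                try
                {
                    return (IController)ObjectFactory.GetInstance(controllerType);
                }
                catch (StructureMapException ex)
                {
                    // WhatDoIHave() lists every PluginFamily StructureMap knows about,
                    // which usually reveals which dependency is missing a registration.
                    Debug.WriteLine(ObjectFactory.WhatDoIHave());
                    Debug.WriteLine(ex);
                    throw;
                }
            }
        }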

    Read the article

  • Provisioning API using Java

    - by user268515
    Hi, I'm working in Java and trying to retrieve all the users in a domain. For that I used the Provisioning API, and it works well. But my idea is to use 2-legged OAuth to retrieve the users from the domain. Is that possible? I don't know how to specify the URL. Please help me. I tried the following program:

        final String CONSUMER_KEY = "xxxxxxxxxx.com";
        final String CONSUMER_SECRET = "12345678122154154df9";
        final String DOMAIN = "xxxxxxxxxx.com";

        GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
        oauthParameters.setOAuthConsumerKey(CONSUMER_KEY);
        oauthParameters.setOAuthConsumerSecret(CONSUMER_SECRET);
        oauthParameters.setOAuthType(OAuthType.TWO_LEGGED_OAUTH);
        OAuthHmacSha1Signer signer = new OAuthHmacSha1Signer();

        URL feedUrl = new URL("https://apps-apis.google.com/a/feeds/" + DOMAIN + "/user/2.0/[email protected]");
        userService = new UserService("Myapplication");
        userService.setOAuthCredentials(oauthParameters, signer);
        userService.useSsl();

        UserFeed allUsers = new UserFeed();
        UserFeed allpage;
        Link nextLink;
        do {
            allpage = userService.getFeed(feedUrl, UserFeed.class);
            allUsers.getEntries().addAll(allpage.getEntries());
            nextLink = allpage.getLink(Link.Rel.NEXT, Link.Type.ATOM);
            if (nextLink != null) {
                feedUrl = new URL(nextLink.getHref());
            }
        } while (nextLink != null);

        return allUsers;
    }

    It returns the following error:

        com.google.gdata.util.AuthenticationException: Unknown authorization header

    Read the article

  • select always returns -1 while trying to read from socket and stdin

    - by Aleyna
    Hello, I have the following code, implemented in C++ on Linux, to watch my listening socket and stdin using select. select, however, keeps returning -1 no matter what I try! What's wrong with this code? I will appreciate any help. Thanks.

        highsock = m_sock; // listening socket
        memset((char *) &connectlist, 0, sizeof(connectlist));
        memset((char *) &socks, 0, sizeof(socks));

        int readsocks;
        struct timeval timeout;
        timeout.tv_sec = 60;
        timeout.tv_usec = 0;

        while (1) {
            updateSelectList();
            //cout << "highest sock: " << highsock << endl;
            tempreadset = readset;
            readsocks = select(highsock+1, &tempreadset, NULL, NULL, &timeout);
            //cout << "# ready: " << readsocks << endl;
            if (readsocks < 0) {
                if (errno == EINTR) continue;
                cout << "select error" << endl;
            }
            if (readsocks > 0) {
                readFromSocks();
            }
        }

        void readFromSocks() {
            if (FD_ISSET(STDIN, &readset)) {
                ...
            } else if (FD_ISSET(m_sock, &readset)) {
                ...
            }
        }

        void updateSelectList() {
            FD_ZERO(&readset);
            FD_SET(STDIN, &readset);
            FD_SET(m_sock, &readset);
            for (int i = 0; i < MAXCONNECTIONS; i++) {
                if (connectlist[i] != 0) {
                    FD_SET(connectlist[i], &readset);
                    if (connectlist[i] > highsock)
                        highsock = connectlist[i];
                }
            }
            highsock = max(max(m_sock, STDIN), highsock);
        }

    Read the article

  • Where to put a text file I want to use in Eclipse?

    - by Jayomat
    Hi there, I need to read a text file when I start my program. I'm using Eclipse and started a new Java project. In my project folder I have the "src" folder and the standard "JRE System Library". I just don't know where to put the text file. I cannot use a hard-coded path because the text file needs to be included with my app. I also tried creating a new folder like "Files" and using the path "Files/staedteliste.txt", but that doesn't work either. I use the following code to read the file, but I get this error:

        Error: java.io.FileNotFoundException: staedteliste.txt (No such file or directory)

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;

        public class Test {

            ArrayList<String[]> values;

            public static void main(String[] args) {
                // TODO Auto-generated method stub
                loadList();
            }

            public static void loadList() {
                BufferedReader reader;
                String zeile = null;
                try {
                    reader = new BufferedReader(new FileReader("staedteliste.txt"));
                    zeile = reader.readLine();
                    ArrayList<String[]> values = new ArrayList<String[]>();
                    while (zeile != null) {
                        values.add(zeile.split(";"));
                        zeile = reader.readLine();
                    }
                    System.out.println(values.size());
                    System.out.println(zeile);
                } catch (IOException e) {
                    System.err.println("Error :" + e);
                }
            }
        }

    Thanks for the help!

    Read the article

  • Spring MVC with annotations: how to ensure a method is always called

    - by TheStijn
    Hi, I'm currently migrating a project that uses Spring MVC without annotations to Spring MVC with annotations. This is causing fewer problems than expected, but I did come across one issue. In my project I have set up an access mechanism. Whether or not a User has access to a certain view depends on more than just the role of the User (e.g. it also depends on the status of the entity, the mode (view/edit), ...). To address this I created an abstract parent controller which has a method hasAccess. This method in turn calls other methods, like getAllowedEditStatuses, which are here and there overridden by the child controllers. The hasAccess method gets called from the showForm method (the code below was trimmed for readability):

        @Override
        protected ModelAndView showForm(final HttpServletRequest request,
                final HttpServletResponse response, final BindException errors) throws Exception {
            Integer id = Integer.valueOf(request.getParameter("ID"));
            Project project = this.getProject(id);
            if (!this.hasAccess(project, this.getActiveUser())) {
                return new ModelAndView("errorNoAccess", "code",
                        project != null ? project.getCode() : null);
            }
            return this.showForm(request, response, project, errors);
        }

    So, if the User has no access to the view, he gets redirected to an error page. Now the 'pickle': how to set this up when using annotations? There no longer is a showForm or other method that is always called by the framework. My (and maybe your) first thought was: simply call this method from within each controller before going to the view. This would of course work, but I was hoping for a nicer, more generic solution (less code duplication). The only other solution I could think of is preceding the hasAccess method with the @ModelAttribute annotation, but this feels a lot like abusing the framework :-). So, does anyone have a (better) idea? Thanks, Stijn

    Read the article

  • .NET Hashtable - "Same" key, different hashes

    - by Simon Lindgren
    Is it possible for two .NET strings to have different hashes? I have a Hashtable with, amongst others, the key "path". When I loop through the elements in the table to print it, I can see that the key exists. When trying to look it up, however, there is no matching element. Debugging suggests that the string I'm looking for has a different hash than the one I'm supplying as the key. This code is in a Castle MonoRail project, using Brail as the view engine. The key I'm looking for is inserted by a Brail line like this:

        UrlHelper.Link(node.CurrentPage.LinkText, {@params: {@path: "/Page1"}})

    Then, in this method (in a custom IRoutingRule):

        public string CreateUrl(System.Collections.IDictionary parameters)
        {
            PrintDictionaryToLog(parameters);
            string url;
            if (parameters.Contains("path"))
            {
                url = (string)parameters["path"];
            }
            else
            {
                return null;
            }
        }

    The key is printed to the log, but the function returns null. I didn't know this could even be a problem with .NET strings, but I guess this is some kind of encoding issue? Oh, and this is running on Mono.
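
    For reference (not from the original question): two keys that print identically can still hash differently if they differ in type, case, or invisible characters. A minimal diagnostic sketch that dumps the actual key objects, using only BCL calls:

        foreach (object key in parameters.Keys)
        {
            Console.WriteLine("type={0} value='{1}' hash={2}",
                key.GetType().FullName, key, key.GetHashCode());

            string s = key as string;
            if (s != null)
            {
                // Reveal hidden characters such as trailing whitespace or non-ASCII look-alikes.
                foreach (char c in s)
                    Console.Write((int)c + " ");
                Console.WriteLine();
            }
        }

    If casing turns out to be the difference, a Hashtable built with new Hashtable(StringComparer.OrdinalIgnoreCase) would ignore it, although here the dictionary is created by MonoRail, so the producer side may be the only place to fix it.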

    Read the article

  • MonoRail ActiveRecord - The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE

    - by Justin
    Hey all, I'm new to MonoRail and ActiveRecord and have inherited an application that I need to modify. I have a Category table (Id, Name) and I'm adding a ParentId FK which references the Id in the same table, so that you can have parent/child category relationships. I tried adding this to my Category model class:

        [Property]
        public int ParentId { get; set; }

    I also tried doing it this way:

        [BelongsTo("ParentId")]
        public Category Parent { get; set; }

    When I call a method to get all parent categories (they have a null ParentId), it works fine:

        public static Category[] GetParentCategories()
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", null));
        }

    However, when I call a method to get all child categories within a specific category, it errors out:

        public static Category[] GetChildCategories(int parentId)
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", parentId));
        }

    The error is:

        "The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE constraint \"FK_Category_ParentId\".
        The conflict occurred in database \"UCampus\", table \"dbo.Category\", column 'Id'.
        The statement has been terminated."

    I'm hard-coding the parentId parameter as 1 and I'm 100% sure it exists as an Id in the Category table, so I don't know why it gives this error. Also, I'm doing a select, not an update, so what is going on here?? Thanks for any input on this, Justin
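
    For reference (not from the original question): one common cause of an UPDATE firing during a query is NHibernate's auto-flush of entities it thinks are dirty, which happens, for example, when a nullable ParentId column is mapped to a non-nullable int so database NULLs come back as 0 and get written back, violating the self-referencing FK. That may or may not be what is happening here. A minimal sketch of a self-referencing mapping in Castle ActiveRecord, mapping the parent through the association only rather than through both a raw [Property] and a [BelongsTo] on the same column:

        [ActiveRecord("Category")]
        public class Category : ActiveRecordBase<Category>
        {
            [PrimaryKey]
            public int Id { get; set; }

            [Property]
            public string Name { get; set; }

            // Parent is mapped via the association; ParentId is not also mapped as a plain [Property].
            [BelongsTo("ParentId")]
            public Category Parent { get; set; }

            [HasMany(typeof(Category), Table = "Category", ColumnKey = "ParentId")]
            public IList<Category> Children { get; set; }
        }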

    Read the article

  • How to loop video using NetStream in Data Generation Mode

    - by WesleyJohnson
    I'm using a NetStream in Data Generation Mode to play an embedded FLV using appendBytes. When the stream is finished playing, I'd like to loop the FLV file. I'm not sure how to achieve this. Here is what I have so far (this isn't a complete example):

        public function createBorderAnimation():void
        {
            // Load the skin image
            borderAnimation = Assets.BorderAnimation;

            // Convert the animation to a byte array
            borderAnimationBytes = new borderAnimation();

            // Initialize the net connection
            border_nc = new NetConnection();
            border_nc.connect( null );

            // Initialize the net stream
            border_ns = new NetStream( border_nc );
            border_ns.client = {
                onMetaData: function( obj:Object ):void { trace(obj); }
            };
            border_ns.addEventListener( NetStatusEvent.NET_STATUS, border_netStatusHandler );
            border_ns.play( null );
            border_ns.appendBytes( borderAnimationBytes );

            // Initialize the animation
            border_vd = new Video( 1024, 768 );
            border_vd.attachNetStream( border_ns );

            // Add the animation to the stage
            ui = new UIComponent();
            ui.addChild( DisplayObject( border_vd ) );
            grpBackground.addElement( ui );
        }

        protected function border_netStatusHandler( event:NetStatusEvent ):void
        {
            if( event.info.code == "NetStream.Buffer.Flush" || event.info.code == "NetStream.Buffer.Empty" )
            {
                border_ns.appendBytesAction( NetStreamAppendBytesAction.RESET_BEGIN );
                border_ns.appendBytes( borderAnimationBytes );
                border_ns.appendBytesAction( NetStreamAppendBytesAction.END_SEQUENCE );
            }
        }

    This will loop the animation, but it starts chewing up memory like crazy. I've tried using NetStream.seek(0) and NetStream.appendBytesAction( NetStreamAppendBytesAction.RESET_SEEK ), but then I'm not sure what to do next. If I just call appendBytes again after that, it doesn't work, presumably because I'm appending the full byte array, which has the FLV header and stuff? I'm not very familiar with how that all works. Any help is greatly appreciated.

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters and fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should set up the COM simulator using the given functions. I created a class for measurement parameters which is instantiated when setting up a new measurement.

        // COM object
        public class MeasurementParams
        {
            public double Param1;
            public double Param2;
        }

        public class COM_Sim : ICOMDevice
        {
            public MeasurementParams newMeasurement;
            IClient client;

            public int NewMeasurement()
            {
                newMeasurement = new MeasurementParams();
            }

            public int SetParam1(double val)
            {
                // why is newMeasurement null when this method is called from the client loop?
                newMeasurement.Param1 = val;
            }

            void loop()
            {
                while (true)
                {
                    // fire event
                    client.HandleEvent;
                }
            }
        }

        public class Client : IClient
        {
            ICOMDevice server;

            public int HandleEvent()
            {
                // handle this event
                server.NewMeasurement();
                server.SetParam1(0.0);
            }

            void loop()
            {
                while (true)
                {
                    // do some stuff...
                    server.NewMeasurement();
                    server.SetParam1(0.0);
                }
            }
        }

    Both loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next function call the object is null again. Doing the same thing when handling the server event works perfectly, because the method runs in the server's thread. How can I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. Also, I need to set up measurements independently of the event handler, which is not fired regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.
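
    As a side note (not from the original question, and it sidesteps rather than explains the COM apartment marshaling involved): one generic way to make cross-thread setup calls safe is to funnel them onto the simulator's own thread through a work queue that its loop drains, so only one thread ever touches newMeasurement. A rough sketch, with member names assumed:

        // Inside the simulator: client threads enqueue work, the simulator thread executes it.
        private readonly object _gate = new object();
        private readonly Queue<Action> _pending = new Queue<Action>();

        public int NewMeasurement()
        {
            lock (_gate) { _pending.Enqueue(() => newMeasurement = new MeasurementParams()); }
            return 0;
        }

        public int SetParam1(double val)
        {
            lock (_gate) { _pending.Enqueue(() => newMeasurement.Param1 = val); }
            return 0;
        }

        void loop()
        {
            while (true)
            {
                Action work = null;
                lock (_gate) { if (_pending.Count > 0) work = _pending.Dequeue(); }

                if (work != null)
                    work();                 // runs on the simulator's own thread
                else
                    client.HandleEvent();   // fire the event as before
            }
        }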

    Read the article

  • Calling unmanaged code from within C#

    - by Charles Gargent
    I am trying to use a DLL in my C# program but I just can't seem to get it to work. I have made a test app, shown below. The return value is 0, however it does not actually do what it is supposed to do, whereas the following command does work:

        rundll32 cmproxy.dll,SetProxy /source_filename proxy-1.txt /backup_filename proxy.bak /DialRasEntry NULL /TunnelRasEntry DSLVPN /Profile "C:\Documents and Settings\Administrator\Application Data\Microsoft\Network\Connections\Cm\dslvpn.cmp"

    Code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Security.Cryptography;
        using System.Runtime.InteropServices;
        using System.Net;
        using WUApiLib;

        namespace nac
        {
            class Program
            {
                [DllImport("cmproxy.dll", CharSet = CharSet.Unicode)]
                static extern int SetProxy(string cmdLine);

                static void Main(string[] args)
                {
                    string cmdLine = @"/source_filename proxy-1.txt /backup_filename proxy.bak /DialRasEntry NULL /TunnelRasEntry DSLVPN /Profile ""C:\Documents and Settings\Administrator\Application Data\Microsoft\Network\Connections\Cm\dslvpn.cmp""";
                    Console.WriteLine(SetProxy(cmdLine));
                }
            }
        }

    Here is the output of the dumpbin /exports command:

        File Type: DLL

        Section contains the following exports for cmproxy.dll

            00000000 characteristics
            3E7FEF8C time date stamp Tue Mar 25 05:56:28 2003
                0.00 version
                   1 ordinal base
                   1 number of functions
                   1 number of names

            ordinal hint RVA      name
                  1    0 00001B68 SetProxy

        Summary
            1000 .data
            1000 .reloc
            1000 .rsrc
            2000 .text

    When this works, it sets the proxy server for a VPN connection.
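
    For reference (not from the original question): entry points written for rundll32 conventionally have the signature void CALLBACK Entry(HWND hwnd, HINSTANCE hinst, LPSTR lpszCmdLine, int nCmdShow) and receive an ANSI command line, so a single-string Unicode P/Invoke may be passing garbage in the wrong parameter slot. A sketch of a declaration matching that convention; whether cmproxy.dll actually follows it is an assumption:

        [DllImport("cmproxy.dll", CharSet = CharSet.Ansi)]
        static extern void SetProxy(IntPtr hwnd, IntPtr hinstance, string cmdLine, int nCmdShow);

        static void Main(string[] args)
        {
            // Same command line as in the question; entry points written for rundll32
            // typically tolerate a null window handle and module handle.
            string cmdLine = "/source_filename proxy-1.txt /backup_filename proxy.bak ...";
            SetProxy(IntPtr.Zero, IntPtr.Zero, cmdLine, 0);
        }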

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means that I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do

        u = User.find(1)
        u.car_id = 1
        u.save!

    I get

        ActiveRecord::RecordInvalid: Validation failed: Password can't be blank

    My guess is that this has something to do with Authlogic. I do not have a validation on password in the User model. This is the migration for the users table:

        def self.up
          create_table :users do |t|
            t.string :email
            t.string :first_name
            t.string :last_name
            t.string :crypted_password
            t.string :password_salt
            t.string :persistence_token
            t.string :single_access_token
            t.string :perishable_token
            t.integer :login_count, :null => false, :default => 0        # optional, see Authlogic::Session::MagicColumns
            t.integer :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_request_at  # optional, see Authlogic::Session::MagicColumns
            t.datetime :current_login_at # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_login_at    # optional, see Authlogic::Session::MagicColumns
            t.string :current_login_ip   # optional, see Authlogic::Session::MagicColumns
            t.string :last_login_ip      # optional, see Authlogic::Session::MagicColumns

            t.timestamps
          end
        end

    And later I added the car_id column to it:

        def self.up
          add_column :users, :user_id, :integer
        end

    Is there any way for me to turn off this validation?

    Read the article

  • Setting Position of source and listener has no effect

    - by Ben E
    Hi guys, this is the first time I've worked with OpenAL, and for the life of me I can't figure out why setting the position of the source has no effect on the sound. The sounds are in stereo format, I've made sure I set the listener position, the sound is not relative to the listener, and OpenAL isn't reporting any error. Can anyone shed some light?

    Create audio device:

        ALenum result;

        mDevice = alcOpenDevice(NULL);
        if((result = alGetError()) != AL_NO_ERROR)
        {
            std::cerr << "Failed to create Device. " << GetALError(result) << std::endl;
            return;
        }

        mContext = alcCreateContext(mDevice, NULL);
        if((result = alGetError()) != AL_NO_ERROR)
        {
            std::cerr << "Failed to create Context. " << GetALError(result) << std::endl;
            return;
        }

        alcMakeContextCurrent(mContext);

        SoundListener::SetListenerPosition(0.0f, 0.0f, 0.0f);
        SoundListener::SetListenerOrientation(0.0f, 0.0f, -1.0f);

    The two listener functions call:

        alListener3f(AL_POSITION, x, y, z);

        Real vec[6] = {x, y, z, 0.0f, 1.0f, 0.0f};
        alListenerfv(AL_ORIENTATION, vec);

    I set the source's position to (1, 0, 0), which should be to the right of the listener, but it has no effect:

        alSource3f(mSourceHandle, AL_POSITION, x, y, z);

    Any guidance would be much appreciated.

    Read the article

  • BindingList<T> and reflection!

    - by Aren B
    Background: working in .NET 2.0 here, reflecting over lists in general. I was originally using t.IsAssignableFrom(typeof(IEnumerable)) to detect whether a property I was traversing supported the IEnumerable interface (and thus I could cast the object to it safely). However, this code was not evaluating to true when the object is a BindingList<T>. Next I tried t.IsSubclassOf(typeof(IEnumerable)) and didn't have any luck either.

    Code:

        /// <summary>
        /// Reflects an enumerable (not a list, bad name, should be fixed later maybe?)
        /// </summary>
        /// <param name="o">The object the property resides on.</param>
        /// <param name="p">The property we're reflecting on.</param>
        /// <param name="rla">The attribute tagged to this property.</param>
        public void ReflectList(object o, PropertyInfo p, ReflectedListAttribute rla)
        {
            Type t = p.PropertyType;
            //if (t.IsAssignableFrom(typeof(IEnumerable)))
            if (t.IsSubclassOf(typeof(IEnumerable)))
            {
                IEnumerable e = p.GetValue(o, null) as IEnumerable;
                int count = 0;
                if (e != null)
                {
                    foreach (object lo in e)
                    {
                        if (count >= rla.MaxRows)
                            break;
                        ReflectObject(lo, count);
                        count++;
                    }
                }
            }
        }

    The intent: I want to tag the lists I want to reflect through with the ReflectedListAttribute and call this function on the properties that have it (already working). Once inside this function, given the object the property resides on and the related PropertyInfo, get the value of the property, cast it to an IEnumerable (assuming that's possible), then iterate through each child and call ReflectObject(...) on the child with the count variable.
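
    For reference (not from the original question): Type.IsAssignableFrom reads in the opposite direction from what the snippet assumes (the caller is the destination type), and IsSubclassOf only walks base classes, never interfaces. A minimal sketch of the three checks side by side:

        // using System; using System.Collections; using System.ComponentModel;
        Type t = typeof(BindingList<int>);

        Console.WriteLine(t.IsAssignableFrom(typeof(IEnumerable)));  // False: asks "can an IEnumerable be stored in a BindingList<int> variable?"
        Console.WriteLine(t.IsSubclassOf(typeof(IEnumerable)));      // False: IsSubclassOf ignores interfaces entirely
        Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(t));  // True:  a BindingList<int> can be stored in an IEnumerable variable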

    Read the article

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single BitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer; the layer stays visible from that point on. If I override "set visible" on the layer as such:

        override public function set visible(value:Boolean):void
        {
            if (value == false)
                this.alpha = 0;
            else
                this.alpha = 1;
        }

    the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this?

    Edit: Here is the code that makes the call:

        private function onVisibleChange():void
        {
            _layer.visible = layerVisible.selected;
            changed();
        }

    The changed() method "bakes" the bitmap:

        public function getBaked():BitmapData
        {
            var w:int = _composite.width + (_atmosphereOuterBlur * 2);
            var h:int = _composite.height + (_atmosphereOuterBlur * 2);

            var bmpData:BitmapData = new BitmapData(w, h, true, 0x00000000);
            var matrix:Matrix = new Matrix();
            var bounds:Rectangle = this.getBounds(this);

            matrix.translate(w/2, h/2);
            bmpData.draw(this, matrix, null, null, new Rectangle(0, 0, w, h), true);

            return bmpData;
        }

    Incidentally, while the layer is still visible, using the Flex debugger I can verify that the layer's visible value is "false".

    Read the article

  • Mocking Autofac's "Resolve" extension method with TypeMock

    - by Lockshopr
    I'm trying to mock an Autofac resolve, such as:

        using System;
        using Autofac;
        using TypeMock.ArrangeActAssert;

        class Program
        {
            static void Main(string[] args)
            {
                var inst = Isolate.Fake.Instance<IContainer>();
                Isolate.Fake.StaticMethods(typeof(ResolutionExtensions), Members.ReturnNulls);
                Isolate.WhenCalled(() => inst.Resolve<IRubber>()).WillReturn(new BubbleGum());
                Console.Out.WriteLine(inst.Resolve<IRubber>());
            }
        }

        public interface IRubber {}
        public class BubbleGum : IRubber {}

    Coming from Moq, the syntax and exceptions from TypeMock confuse me a great deal. Having initially run this in a TestMethod, I kept getting an exception resembling "WhenCalled cannot be run without a complementing behavior". I tried defining behaviors for everyone and their mothers, but to no avail. Then I debug-stepped through the test run and saw that an actual exception was fired from Autofac: "IRubber has not been registered". So it's obvious that the static Resolve function isn't being faked, and I can't get it to be faked, no matter how I go about hooking it up.

        Isolate.WhenCalled(() => ResolutionExtensions.Resolve<IRubber>(null)).WillReturn(new BubbleGum());

    ... throws an exception from Autofac complaining that the IComponentContext cannot be null. Feeding it the presumably faked IContainer (or faking an IComponentContext instead) gets me back to the "IRubber not registered" error.

    Read the article

  • OdbcCommand on Stored Procedure - "Parameter not supplied" error on Output parameter

    - by Aaron
    I'm trying to execute a stored procedure (against SQL Server 2005, through the ODBC driver) and I receive the following error:

        Procedure or Function 'GetNodeID' expects parameter '@ID', which was not supplied.

    @ID is the OUTPUT parameter for my procedure; there is an input @machine which is specified and defaults to null in the stored procedure:

        ALTER PROCEDURE [dbo].[GetNodeID]
            @machine nvarchar(32) = null,
            @ID int OUTPUT
        AS
        BEGIN
            SET NOCOUNT ON;

            IF EXISTS(SELECT * FROM Nodes WHERE NodeName=@machine)
            BEGIN
                SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine)
            END
            ELSE
            BEGIN
                INSERT INTO Nodes (NodeName) VALUES (@machine)
                SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine)
            END
        END

    The following is the code I'm using to set the parameters and call the procedure:

        OdbcCommand Cmd = new OdbcCommand("GetNodeID", _Connection);
        Cmd.CommandType = CommandType.StoredProcedure;

        Cmd.Parameters.Add("@machine", OdbcType.NVarChar);
        Cmd.Parameters["@machine"].Value = Environment.MachineName.ToLower();

        Cmd.Parameters.Add("@ID", OdbcType.Int);
        Cmd.Parameters["@ID"].Direction = ParameterDirection.Output;

        Cmd.ExecuteNonQuery();
        _NodeID = (int)Cmd.Parameters["@Count"].Value;

    I've also tried using Cmd.ExecuteScalar with no success. If I break before I execute the command, I can see that @machine has a value. If I execute the procedure directly from Management Studio, it works correctly. Any thoughts? Thanks
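
    For reference (not from the original question): the ODBC provider does not bind parameters by name; stored procedures are normally invoked through the ODBC CALL escape syntax with positional "?" markers. A sketch of that approach using the names from the question (untested against this schema):

        using (OdbcCommand cmd = new OdbcCommand("{ call dbo.GetNodeID(?, ?) }", _Connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Parameters bind by position, so add them in the order they appear in the CALL.
            cmd.Parameters.Add("@machine", OdbcType.NVarChar, 32).Value = Environment.MachineName.ToLower();

            OdbcParameter id = cmd.Parameters.Add("@ID", OdbcType.Int);
            id.Direction = ParameterDirection.Output;

            cmd.ExecuteNonQuery();
            int nodeId = (int)id.Value;
        }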

    Read the article

  • How can I read the properties of an object that I assign to the Session in ASP.NET MVC?

    - by quakkels
    Hey all, I'm trying my hand at creating a session which stores member information that the application can use to reveal certain navigation and allow access to certain pages and member-role-specific functionality. I've been able to assign my MemberLoggedIn object to the session in this way:

        // code excerpt start...
        MemberLoggedIn loggedIn = new MemberLoggedIn();

        if (computedHash == member.Hash)
        {
            loggedIn.ID = member.ID;
            loggedIn.Username = member.Username;
            loggedIn.Email = member.Email;
            loggedIn.Superuser = member.Superuser;
            loggedIn.Active = member.Active;
            Session["loggedIn"] = loggedIn;
        }
        else if (ModelState.IsValid)
        {
            ModelState.AddModelError("Password", "Incorrect Username or Password.");
        }
        return View();

    That works great. I can then send the properties of Session["loggedIn"] to the view in this way:

        [ChildActionOnly]
        public ActionResult Login()
        {
            if (Session["loggedIn"] != null)
                ViewData.Model = Session["loggedIn"];
            else
                ViewData.Model = null;
            return PartialView();
        }

    In the partial view I can reference the session data by using Model.Username or Model.Superuser. However, it doesn't seem to work that way in the controller or in a custom action filter. Is there a way to get the equivalent of Session["loggedIn"].Username?
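
    For reference (not from the original question): Session stores plain object references, so in a controller or action filter the usual pattern is to cast the entry back to its concrete type before touching its members. A minimal sketch; the filter name is made up:

        // In a controller action:
        MemberLoggedIn current = Session["loggedIn"] as MemberLoggedIn;
        if (current != null)
        {
            string name = current.Username;
        }

        // In a custom action filter, Session hangs off the filter context instead:
        public class RequiresSuperuserAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                MemberLoggedIn current = filterContext.HttpContext.Session["loggedIn"] as MemberLoggedIn;
                if (current == null || !current.Superuser)
                {
                    filterContext.Result = new HttpUnauthorizedResult();
                }
            }
        }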

    Read the article

  • Strange problem with vectors.

    - by Catalin Dumitru
    I have a really strange problem with STL vectors, in which the wrong destructor is called for the right object when I call the erase method, if that makes any sense. My code looks something like this:

        for(vector<Category>::iterator iter = this->children.begin(); iter != this->children.end(); iter++)
        {
            if((*iter).item == item)
            {
                this->children.erase(iter);
                return;
            }
            -------------------------
        }

    It's just a simple function that finds the element in the vector which has some item to be searched, and removes that element from the vector. My problem is that when the erase function is called, and thus the object the iterator is pointing at is being destroyed, the wrong destructor is called. More specifically, the destructor of the last element in the vector is called, not that of the actual object being removed. Thus the memory is released from the wrong object, which will still be an element in the vector, and the actual object which is removed from the vector still has all of its memory intact. The copy constructor of the object looks like this:

        Category::Category(const Category &from)
        {
            this->name = from.name;
            for(vector<Category>::const_iterator iter = from.children.begin(); iter != from.children.end(); iter++)
                this->children.push_back((*iter));
            this->item = new QTreeWidgetItem;
        }

    And the destructor:

        Category::~Category()
        {
            this->children.clear();
            if(this->item != NULL)
            {
                QTreeWidgetItem* parent = this->item->parent();
                if(parent != NULL)
                    parent->removeChild(this->item);
                delete this->item;
            }
        }

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);

        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan and the largest-cost nodes are:

    A Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post.

    A Table Spool, which is 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
            <OutputList>
                <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
                <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
                <ColumnReference Column="Expr1054" />
                <ColumnReference Column="Expr1055" />
            </OutputList>
            <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run

        ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL

    before the queries and then

        ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL

    after the queries, and that hardly shaved anything off the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send them. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
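
    For reference (not from the original question): when each row is sent as its own ad-hoc INSERT, much of the time typically goes to per-statement round trips, plan compilation, and per-row log flushes. One common mitigation is to reuse a single parameterized SqlCommand inside an explicit transaction (or to switch to SqlBulkCopy if the per-row identity value is not actually needed). A rough sketch, with the column list trimmed and InvoiceDetailRow/rows as placeholder names:

        // requires System.Data and System.Data.SqlClient
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            using (SqlCommand cmd = new SqlCommand(
                "INSERT INTO InvoiceDetail (LegacyId, InvoiceId, Fee) VALUES (@LegacyId, @InvoiceId, @Fee); " +
                "SELECT SCOPE_IDENTITY();", conn, tx))
            {
                cmd.Parameters.Add("@LegacyId", SqlDbType.Int);
                cmd.Parameters.Add("@InvoiceId", SqlDbType.Int);
                cmd.Parameters.Add("@Fee", SqlDbType.Money);

                foreach (InvoiceDetailRow row in rows)   // 'rows' stands in for whatever the app generates today
                {
                    cmd.Parameters["@LegacyId"].Value = row.LegacyId;
                    cmd.Parameters["@InvoiceId"].Value = row.InvoiceId;
                    cmd.Parameters["@Fee"].Value = row.Fee;

                    // SCOPE_IDENTITY() comes back as decimal
                    decimal newId = (decimal)cmd.ExecuteScalar();
                }
                tx.Commit();
            }
        }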

    Read the article

  • How do I force or add the content length for ajax type POST requests in Firefox?

    - by Jayson
    I'm trying to POST an HTTP request using Ajax, but I'm getting a response from the Apache server (via modsec_audit) that "POST request must have a Content-Length header." I do not want to disable this in modsec_audit. This occurs only in Firefox, not in IE. Further, I switched to using a POST rather than a GET to keep IE from caching my results. This is a simplified version of the code I'm using for the request; I'm not using any JavaScript framework.

        function getMyStuff() {
            var SearchString = '';
            /* build search string */
            ...
            /* now do request */
            var xhr = createXMLHttpRequest();
            var RequestString = 'someserverscript.cfm' + SearchString;
            xhr.open("POST", RequestString, true);
            xhr.onreadystatechange = function() {
                processResponse(xhr);
            }
            xhr.send(null);
        }

        function processResponse(xhr) {
            var serverResponse = xhr.responseText;
            var container = document.getElementById('myResultsContainer');
            if (xhr.readyState == 4) {
                container.innerHTML = serverResponse;
            }
        }

        function createXMLHttpRequest() {
            try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
            try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
            try { return new XMLHttpRequest(); } catch (e) {}
            return null;
        }

    How do I force or add the Content-Length header for Ajax-type POST requests in Firefox?

    Read the article

  • Switch statement usage - C

    - by Jamie Keeling
    Hello, I have a thread function in Process B that contains a switch to perform certain operations based on the results of an event sent from Process A; these are stored as two elements in an array. I set the first element to the event which signals when Process A has data to send, and I set the second element to the event which indicates when Process A has closed. I have begun to implement the functionality for the switch statement, but I'm not getting the results I expect. Consider the following:

        //
        // Thread function
        DWORD WINAPI ThreadFunc(LPVOID passedHandle)
        {
            for(i = 0; i < 2; i++)
            {
                ghEvents[i] = OpenEvent(EVENT_ALL_ACCESS, FALSE, TEXT("Global\\ProducerEvents"));
                if(ghEvents[i] == NULL)
                {
                    getlasterror = GetLastError();
                }
            }

            dwProducerEventResult = WaitForMultipleObjects(2, ghEvents, FALSE, INFINITE);

            switch (dwProducerEventResult)
            {
                case WAIT_OBJECT_0 + 0:
                {
                    // Producer sent data
                    //unpackedHandle = *((HWND*)passedHandle);
                    MessageBox(NULL, L"Test", L"Test", MB_OK);
                    break;
                }
                case WAIT_OBJECT_0 + 1:
                {
                    // Producer closed
                    ExitProcess(1);
                    break;
                }
                default:
                    return;
            }
        }

    As you can see, if the event in the first array element is signalled, Process B should display a simple message box; if the second element is signalled, the application should close. When I actually close Process A, Process B displays the message box instead. If I leave the first case blank (do nothing), both applications close as they should. Furthermore, when Process B sends data an error is thrown (when I comment out the unpacking). Have I implemented my switch statement incorrectly? I thought I handled the unpacking of the HWND correctly too. Any suggestions? Thanks for your time.

    Read the article

  • TSQL to insert an ascending value

    - by David Neale
    I am running some SQL that identifies records which need to be marked for deletion and inserts a value into those records. This value must be changed to render the record useless, and each record must be changed to a unique value because of a database constraint.

        UPDATE Users
        SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1
        FROM Users a
        LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
        WHERE (b.sAMAccountName is NULL AND a.Username LIKE 'AVSOMPOL%')
           OR b.userAccountControl = 514

    This is the important bit:

        SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1

    What I've tried to do is have deleted records get their Username field set to 'Deletedxxx'. The ISNULL is needed because there may be no records matching the

        SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%'

    statement, and that will return NULL. I get a syntax error when trying to parse this:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'SELECT'.
        Msg 102, Level 15, State 1, Line 2
        Incorrect syntax near ')'.

    I'm sure there must be a better way to go about this. Any ideas?

    Read the article

  • .NET Finalizer Order / Semantics in Esent and RavenDB

    - by mattcodes
    Help me understand. I've read that "the time and order of execution of finalizers cannot be predicted or pre-determined". Correct? However, looking at the RavenDB source code (TransactionalStorage.cs) I see this:

        ~TransactionalStorage()
        {
            try
            {
                Trace.WriteLine(
                    "Disposing esent resources from finalizer! You should call TransactionalStorage.Dispose() instead!");
                Api.JetTerm2(instance, TermGrbit.Abrupt);
            }
            catch (Exception exception)
            {
                try
                {
                    Trace.WriteLine("Failed to dispose esent instance from finalizer because: " + exception);
                }
                catch
                {
                }
            }
        }

    The Api class (which belongs to Managed Esent) presumably takes handles on native resources, presumably using a SafeHandle? So if I understand correctly, the native handles' SafeHandles could be finalized before TransactionalStorage, which could have undesired effects; perhaps that is why Ayende has added a catch-all clause around this? Actually, diving into the Esent code, it does not use SafeHandles. According to CLR via C#, this is dangerous:

        internal static class SomeType
        {
            // This prototype is not robust
            [DllImport("Kernel32", CharSet=CharSet.Unicode, EntryPoint="CreateEvent")]
            private static extern IntPtr CreateEventBad(
                IntPtr pSecurityAttributes, Boolean manualReset, Boolean initialState, String name);

            // This prototype is robust
            [DllImport("Kernel32", CharSet=CharSet.Unicode, EntryPoint="CreateEvent")]
            private static extern SafeWaitHandle CreateEventGood(
                IntPtr pSecurityAttributes, Boolean manualReset, Boolean initialState, String name);

            public static void SomeMethod()
            {
                IntPtr handle = CreateEventBad(IntPtr.Zero, false, false, null);
                SafeWaitHandle swh = CreateEventGood(IntPtr.Zero, false, false, null);
            }
        }

    Managed Esent (NativeMethods.cs) looks like this (using ints vs IntPtrs?):

        [DllImport(EsentDll, CharSet = EsentCharSet, ExactSpelling = true)]
        public static extern int JetCreateDatabase(IntPtr sesid, string szFilename, string szConnect, out uint dbid, uint grbit);

    Is Managed Esent handling finalization/disposal the correct way, and second, is RavenDB handling the finalizer the correct way or compensating for Managed Esent?
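
    For reference (not from the original question): the pattern CLR via C# advocates wraps the native handle in a SafeHandle subclass, which gives the handle a critical finalizer of its own so it is released even if the owning object's finalizer never runs or runs in an unexpected order. A minimal sketch of such a wrapper; this is an illustration, not Managed Esent's actual code, and the native library and function are hypothetical:

        using System;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        // Wraps a native handle whose release function takes the raw IntPtr.
        internal sealed class NativeResourceHandle : SafeHandleZeroOrMinusOneIsInvalid
        {
            private NativeResourceHandle() : base(true) { }

            protected override bool ReleaseHandle()
            {
                // Called by the CLR from the SafeHandle's critical finalizer
                // (or from Dispose), so the native resource cannot be leaked.
                return CloseNativeResource(handle);
            }

            [DllImport("somenative.dll")]   // hypothetical native library
            private static extern bool CloseNativeResource(IntPtr handle);
        }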

    Read the article

  • Rails: Oracle constraint violation

    - by justinbach
    I'm doing maintenance work on a Rails site that I inherited; it's driven by an Oracle database, and I've got access to both development and production installations of the site (each with its own Oracle DB). I'm running into an Oracle error when trying to insert data on the production site, but not on the dev site:

        ActiveRecord::StatementInvalid (OCIError: ORA-00001: unique constraint (DATABASE_NAME.PK_REGISTRATION_OWNERSHIP) violated:
        INSERT INTO registration_ownerships (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at)
        VALUES ('2006-05-04 16:30:47', 3, NULL, NULL, 2920, '2006-05-04 16:30:47')):
        /usr/local/lib/ruby/gems/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:221:in `execute'
        app/controllers/vendors_controller.rb:94:in `create'

    As far as I can tell (I'm using Navicat as an Oracle client), the DB schema for the dev site is identical to that of the live site. I'm not an Oracle expert; can anyone shed light on why I'd be getting the error in one installation and not the other? Incidentally, both the dev and production registration_ownerships tables are populated with lots of data, including duplicate entries for country_ownership_id (driven by index PK_REGISTRATION_OWNERSHIP). Please let me know if you need more information to troubleshoot. I'm sorry I haven't given more already, but I just wasn't sure which details would be helpful.

    Read the article
