Search Results

Search found 17407 results on 697 pages for 'static constructor'.

Page 560/697

  • Method binding to base method in external library can't handle new virtual methods "between"

    - by Berg
    Lets say I have a library, version 1.0.0, with the following contents: public class Class1 { public virtual void Test() { Console.WriteLine( "Library:Class1 - Test" ); Console.WriteLine( "" ); } } public class Class2 : Class1 { } and I reference this library in a console application with the following contents: class Program { static void Main( string[] args ) { var c3 = new Class3(); c3.Test(); Console.ReadKey(); } } public class Class3 : ClassLibrary1.Class2 { public override void Test() { Console.WriteLine("Console:Class3 - Test"); base.Test(); } } Running the program will output the following: Console:Class3 - Test Library:Class1 - Test If I build a new version of the library, version 2.0.0, looking like this: public class Class1 { public virtual void Test() { Console.WriteLine( "Library:Class1 - Test V2" ); Console.WriteLine( "" ); } } public class Class2 : Class1 { public override void Test() { Console.WriteLine("Library:Class2 - Test V2"); base.Test(); } } and copy this version to the bin folder containing my console program and run it, the results are: Console:Class3 - Test Library:Class1 - Test V2 I.e, the Class2.Test method is never executed, the base.Test call in Class3.Test seems to be bound to Class1.Test since Class2.Test didn't exist when the console program was compiled. This was very surprising to me and could be a big problem in situations where you deploy new versions of a library without recompiling applications. Does anyone else have experience with this? Are there any good solutions? This makes it tempting to add empty overrides that just calls base in case I need to add some code at that level in the future...

  • How to make a class marked with DataObjectAttribute visible to an ObjectDataSource in a Web Application

    - by nCdy
    Here is the class code: [DataObjectAttribute] public class Report { public this() {} [DataObjectMethodAttribute(DataObjectMethodType.Select, true)] public static GetAllEmployees() : DataTable { null } [DataObjectMethodAttribute(DataObjectMethodType.Delete, true)] public DeleteEmployeeByID(employeeID : int) : void { throw Exception("The value passed to the delete method is " + employeeID.ToString()); } } But I still can't find where, how, and what I must configure to access it: <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod=" ?????????? "> </asp:ObjectDataSource> A Web Application doesn't support App_Code, but I can use the compiled Bin somehow; the question is how? The text from this link only confused me more :( thank you

  • boost::serialization with mutable members

    - by redmoskito
    Using boost::serialization, what's the "best" way to serialize an object that contains cached, derived values in mutable members, such that cached members aren't serialized, but on deserialization, they are initialized the their appropriate default. A definition of "best" follows later, but first an example: class Example { public: Example(float n) : num(n), sqrt_num(-1.0) {} float get_num() const { return num; } // compute and cache sqrt on first read float get_sqrt() const { if(sqrt_num < 0) sqrt_num = sqrt(num); return sqrt_num; } template <class Archive> void serialize(Archive& ar, unsigned int version) { ... } private: float num; mutable float sqrt_num; }; On serialization, only the "num" member should be saved. On deserialization, the sqrt_num member must be initialized to its sentinel value indicating it needs to be computed. What is the most elegant way to implement this? In my mind, an elegant solution would avoid splitting serialize() into separate save() and load() methods (which introduces maintenance problems). One possible implementation of serialize: template <class Archive> void serialize(Archive& ar, unsigned int version) { ar & num; sqrt_num = -1.0; } This handles the deserialization case, but in the serialization case, the cached value is killed and must be recomputed. Also, I've never seen an example of boost::serialize that explicitly sets members inside of serialize(), so I wonder if this is generally not recommended. Some might suggest that the default constructor handles this, for example: int main() { Example e; { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> e; } cout << e.get_sqrt() << endl; return 0; } which works in this case, but I think fails if the object receiving the deserialized data has already been initialized, as in the example below: int main() { Example ex1(4); Example ex2(9); cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; // the following two blocks should implement ex2 = ex1; // save ex1 to archive { std::ofstream ofs("filename"); boost::archive::text_oarchive oa(ofs); oa << ex1; } // read it back into ex2 { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> ex2; } // these should be equal now, but aren't, // since Example::serialize() doesn't modify num_sqrt cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; return 0; } I'm sure this issue has come up with others, but I have struggled to find any documentation on this particular scenario. Thanks!

  • iPhone UITableView stutters with custom cells. How can I get it to scroll smoothly?

    - by Charles S.
    I'm using a UITableView to display custom cells created with Interface Builder. The table scrolling isn't very smooth (it "stutters") which leaves me to believe cells aren't being reused. Is there something wrong with this code? - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"TwitterCell"; TwitterCell *cell = (TwitterCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil){ //new cell NSArray *topLevelObjects = [[NSBundle mainBundle] loadNibNamed:@"TwitterCell" owner:nil options:nil]; for(id currentObject in topLevelObjects) { if([currentObject isKindOfClass:[TwitterCell class]]) { cell = (TwitterCell *)currentObject; break; } } } if([self.tweets count] > 0) { cell.lblUser.text = [[self.tweets objectAtIndex:[indexPath row]] username]; cell.lblTime.text = [[self.tweets objectAtIndex:[indexPath row]] time]; [cell.txtText setText:[[self.tweets objectAtIndex:[indexPath row]] text]]; [[cell imgImage] setImage:[[self.tweets objectAtIndex:[indexPath row]] image]]; } else { cell.txtText.text = @"Loading..."; } cell.selectionStyle = UITableViewCellSelectionStyleNone; return cell; }

  • Trouble creating a Java policy server for a simple Flash app

    - by simonwulf
    I'm trying to create a simple Flash chat application for educational purposes, but I'm stuck trying to send a policy file from my Java server to the Flash app (after several hours of googling with little luck). The policy file request reaches the server, which sends a hardcoded policy XML back to the app, but the Flash app doesn't seem to react to it at all until it gives me a security sandbox error. I'm loading the policy file using the following code in the client: Security.loadPolicyFile("xmlsocket://myhostname:" + PORT); The server recognizes the request as "<policy-file-request/>" and responds by sending the following xml string to the client: public static final String POLICY_XML = "<?xml version=\"1.0\"?>" + "<cross-domain-policy>" + "<allow-access-from domain=\"*\" to-ports=\"*\" />" + "</cross-domain-policy>"; The code used to send it looks like this: try { _dataOut.write(PolicyServer.POLICY_XML + (char)0x00); _dataOut.flush(); System.out.println("Policy sent to client: " + PolicyServer.POLICY_XML); } catch (Exception e) { trace(e); } Did I mess something up with the xml or is there something else I might have overlooked?
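
    For comparison, here is a minimal sketch of a standalone policy responder in Java (class name and port are made up; it assumes the Flash Player wants the policy XML followed by a null byte, after which the socket can be closed):

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class SimplePolicyServer {
            static final String POLICY_XML =
                "<?xml version=\"1.0\"?>"
                + "<cross-domain-policy>"
                + "<allow-access-from domain=\"*\" to-ports=\"*\" />"
                + "</cross-domain-policy>";

            public static void main(String[] args) throws Exception {
                ServerSocket server = new ServerSocket(843); // 843 is the usual master policy port; any port the SWF asks for also works
                while (true) {
                    Socket client = server.accept();
                    InputStream in = client.getInputStream();
                    while (in.read() > 0) { /* consume "<policy-file-request/>" up to its null terminator */ }
                    OutputStream out = client.getOutputStream();
                    out.write(POLICY_XML.getBytes("UTF-8"));
                    out.write(0);   // the response must be null-terminated too
                    out.flush();
                    client.close();
                }
            }
        }

    If the client still times out, the usual suspects are a response that is not null-terminated, a writer that buffers without flushing, or the player probing port 843 for a master policy file before trying the port given to loadPolicyFile.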

  • NetBeans Tips and Tricks

    - by cdmckay
    I just saw an Eclipse tips & tricks post and was wondering if anyone had any tips & tricks for my IDE of choice: NetBeans. Here are a few I know and find to be useful: Removing a package: After you remove a package in NetBeans, it sticks around as a grayed-out package in your Project view. To get rid of that, switch to Files view and delete the directory. Alt-Insert (in Windows) opens up a Generate submenu at your cursor. A nice shortcut for quickly generating getters/setters (among other things). Selecting a chunk of code, right-clicking and then clicking "Refactor > Introduce Method" will have NetBeans introduce a method, complete with arguments and return value. Of course you have to make sure the chunk of code only has one return value. Sometimes when you run a build and it crashes, the Java window sticks around at the bottom. I used to just click X until Windows let me End Task, but there's a nicer way to get rid of them. Click "Run > Stop Build/Run" and NetBeans will close the window for you. It'll even let you close multiple applications at once. These may seem obvious to grizzled NetBeans developers, but I thought they might be useful for NetBeans newbs like me. Anyone else have any tips/tricks to share? Here are some from the comments: NetBeans allows for code templates. You can even add yours on the Code Templates tab under the Editor settings on the Options window. Some examples: Type sout and hit the tab key as a shortcut for System.out.println(""). Type psvm and hit the tab key as a shortcut for public static void main(String args[]) {}. Ctrl+Shift+C: Comments out the selected block of code. Alt+Shift+F: Formats the selected block of code. Ctrl+E: Deletes the current line. Ctrl+Shift+I: Fixes your imports, handy if you've just written a piece of code that needs a lot of packages imported.

  • NullReferenceException when initializing NServiceBus within web application Application_Start method

    - by SteveBering
    I am running the 2.0 RTM of NServiceBus and am getting a NullReferenceException when my MessageModule binds the CurrentSessionContext to my NHibernate sessionfactory. From within my Application_Start, I call the following method: public static void WithWeb(IUnityContainer container) { log4net.Config.XmlConfigurator.Configure(); var childContainer = container.CreateChildContainer(); childContainer.RegisterInstance<ISessionFactory>(NHibernateSession.SessionFactory); var bus = NServiceBus.Configure.WithWeb() .UnityBuilder(childContainer) .Log4Net() .XmlSerializer() .MsmqTransport() .IsTransactional(true) .PurgeOnStartup(false) .UnicastBus() .ImpersonateSender(false) .LoadMessageHandlers() .CreateBus(); var activeBus = bus.Start(); container.RegisterInstance(typeof(IBus), activeBus); } When the bus is started, my message module starts with the following: public void HandleBeginMessage() { try { CurrentSessionContext.Bind(_sessionFactory.OpenSession()); } catch (Exception e) { _log.Error("Error occurred in HandleBeginMessage of NHibernateMessageModule", e); throw; } } In looking at my log, we are logging the following error when the bind method is called: System.NullReferenceException: Object reference not set to an instance of an object. at NHibernate.Context.WebSessionContext.GetMap() at NHibernate.Context.MapBasedSessionContext.set_Session(ISession value) at NHibernate.Context.CurrentSessionContext.Bind(ISession session) Apparently, there is some issue in getting access to the HttpContext. Should this call to configure NServiceBus occur later in the lifecycle than Application_Start? Or is there another workaround that others have used to get handlers working within an Asp.NET Web application? Thanks, Steve

  • How to determine which inheriting class is using an abstract class' methods.

    - by Kin
    In my console application have an abstract Factory class "Listener" which contains code for listening and accepting connections, and spawning client classes. This class is inherited by two more classes (WorldListener, and MasterListener) that contain more protocol specific overrides and functions. I also have a helper class (ConsoleWrapper) which encapsulates and extends System.Console, containing methods for writing to console info on what is happening to instances of the WorldListener and MasterListener. I need a way to determine in the abstract ListenerClass which Inheriting class is calling its methods. Any help with this problem would be greatly appreciated! I am stumped :X Simplified example of what I am trying to do. abstract class Listener { public void DoSomething() { if(inheriting class == WorldListener) ConsoleWrapper.WorldWrite("Did something!"); if(inheriting class == MasterListener) ConsoleWrapper.MasterWrite("Did something!"); } } public static ConsoleWrapper { public void WorldWrite(string input) { System.Console.WriteLine("[World] {0}", input); } } public class WorldListener : Listener { public void DoSomethingSpecific() { ConsoleWrapper.WorldWrite("I did something specific!"); } } public void Main() { new WorldListener(); new MasterListener(); } Expected output [World] Did something! [World] I did something specific! [Master] Did something! [World] I did something specific!

  • create TableModel and populate jTable dynamically

    - by Julia
    Hi all! I want to store the results of reading a Lucene index into a JTable, so that I can make it sortable by different columns. From the index I am reading terms with different measures of their frequencies. The table columns are these: [string term][int absFrequency][int docFrequency][double invFrequency] So in AbstractTableModel I can define column names, but I don't know how to get the Object[][] data with results from the following method: public static void FrequencyMap(Directory indexDir) throws Exception { List<Object>redoviLista = new ArrayList<Object>(); //final Map<String,TermRow> map = new TreeMap<String,TermRow>(); List<String>termList = new ArrayList<String>(); IndexReader iReader = IndexReader.open(indexDir); FilterIndexReader fReader = new FilterIndexReader(iReader); int numOfDocs = fReader.numDocs(); TermEnum terms = fReader.terms(); while (terms.next()){ Term term = terms.term(); String termText = term.text(); termList.add(termText); //Calculating the frequencies int df = iReader.docFreq(term); double idf = 0.0F; idf = Math.log10((double) numOfDocs / df); double tfidf = (df*idf); //Here comes the important part Object oneTableRow[] = {termText, df, idf, tfidf}; redoviLista.add(oneTableRow); // So I thought to store them in a list and later copy it into the table, but didn't manage } iReader.close(); // So I need something like this, and I need this array to be stored outside of this method Object[][]data = new Object[redoviLista.size()][]; for (int i = 0; i < data.length; i++) { data[i][0] = redoviLista.get(i); } So I am kinda stuck here on how to proceed to implement AbstractTableModel and populate and display this table .... :/ Please help!
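
    A sketch of one way to finish this (the class name TermTableModel is made up): have FrequencyMap collect each row as an Object[] in a List<Object[]> and return that list, then feed it to a small AbstractTableModel:

        import java.util.List;
        import javax.swing.JTable;
        import javax.swing.table.AbstractTableModel;

        class TermTableModel extends AbstractTableModel {
            private final String[] columns = {"term", "absFrequency", "docFrequency", "invFrequency"};
            private final List<Object[]> rows;

            TermTableModel(List<Object[]> rows) { this.rows = rows; }

            public int getRowCount() { return rows.size(); }
            public int getColumnCount() { return columns.length; }
            public String getColumnName(int col) { return columns[col]; }
            public Object getValueAt(int row, int col) { return rows.get(row)[col]; }
        }

        // usage, assuming FrequencyMap is changed to return List<Object[]>:
        // JTable table = new JTable(new TermTableModel(FrequencyMap(indexDir)));
        // table.setAutoCreateRowSorter(true); // Java 6+, gives click-to-sort columns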

  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in C# Console Application. Here's the code I'm using: static void Main(string[] args) { DatastoreManager dm = new DatastoreManager(1033); Collection<Platform> platforms = dm.GetPlatforms(); foreach (var p in platforms) { Console.WriteLine("{0} {1}", p.Name, p.Id); } Platform platform = platforms[3]; Console.WriteLine("Selected {0}", platform.Name); Device device = platform.GetDevices()[0]; device.Connect(); Console.WriteLine("Device Connected"); SystemInfo info = device.GetSystemInfo(); Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo); Console.ReadLine(); } My question: Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell. Here's the stack trace as requested: System.IO.DirectoryNotFoundException was unhandled Message="The system cannot find the path specified.\r\n" Source="Device Connection Manager" StackTrace: at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice() at Microsoft.SmartDevice.Connectivity.Device.Connect() at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

  • Nginx not responding to remote IP

    - by bucabay
    I just installed Nginx listening on 8083 I can get a HTTP response when sending a HTTP request from the local machine. eg: curl -i localhost:8083 However, when I do the same from a remote machine, it just hangs until the ssh times out, or when the browser times out if accessed from the browser. I pretty much have the default config: user apache apache; worker_processes 1; error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8083; server_name _; charset utf-8; #access_log logs/host.access.log main; location / { root html; index index.html index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ /\.ht { deny all; } } } any ideas?

  • Calculate car filled up times

    - by Ivan
    Here is the question: The driving distance between Perth and Adelaide is 1996 miles. On the average, the fuel consumption of a 2.0 litre 4 cylinder car is 8 litres per 100 kilometres. The fuel tank capacity of such a car is 60 litres. Design and implement a JAVA program that prompts for the fuel consumption and fuel tank capacity of the aforementioned car. The program then displays the minimum number of times the car’s fuel tank has to be filled up to drive from Perth to Adelaide. Note that 62 miles is equal to 100 kilometres. What data will you use to test that your algorithm works correctly? Here is what I've done so far: import java.util.Scanner;// public class Ex4{ public static void main( String args[] ){ Scanner input = new Scanner( System.in ); double distance, consumption, capacity, time; distance = Math.sqrt(1996/62*100); consumption = Math.sqrt(8/100); capacity = 60; time = Math.sqrt(distance*consumption/capacity); System.out.println("The car's fuel tank need to be filled up:" + time + "times"); } } I can compile it but the problem is that the result is always 0.0, can anyone help me what's wrong with it ?
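
    For what it is worth, a corrected sketch under the same assumptions: the 0.0 comes from the integer division 8/100 (which is 0), and none of these formulas call for Math.sqrt at all.

        import java.util.Scanner;

        public class Ex4 {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                System.out.print("Fuel consumption (litres per 100 km): ");
                double consumption = input.nextDouble();   // e.g. 8
                System.out.print("Fuel tank capacity (litres): ");
                double capacity = input.nextDouble();      // e.g. 60
                double distanceKm = 1996 / 62.0 * 100;     // miles -> km; 62.0 forces floating-point division
                double litresNeeded = distanceKm / 100 * consumption;
                int fillUps = (int) Math.ceil(litresNeeded / capacity);
                System.out.println("The fuel tank has to be filled up " + fillUps + " times");
            }
        }

    Whether the fill-up at departure counts is an assumption of this sketch; with 1996 miles, 8 l/100 km and a 60 l tank it prints 5, and good test data would also include a trip shorter than one tank (expecting 1) and one whose length is an exact multiple of the tank range.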

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I am designing the new App now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary based values (currency, country, tax-whatever data) - dimensions. I cannot be assured though that there won't be nulls. So I am thinking: create an empty value in each of the dictionaries with a special keyID, i.e. -1; do the ETL (SSIS), do the correct stuff and insert -1 where it needs to go; let the DAL know that -1 is special (static const, whatever); don't bother checking dictionary entries for nullness in the code because THEY will always have a value. But maybe I should be thinking: import the data AS IS; let the DAL do the thinking using the Empty Record pattern; still don't care in the code because the business layer will have what it needs from the DAL. I think this is more of an approach thing but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty record problem. I do use the emptyCustomer thing all the time and other defaults too.

  • Show RGB888 content

    - by Abhi
    Hi all! I have to show RGB888 content using the ShowRGBContent function. The below function is a ShowRGBContent function for yv12-rgb565 & UYVY-RGB565 static void ShowRGBContent(UINT8 * pImageBuf, INT32 width, INT32 height) { LogEntry(L"%d : In %s Function \r\n",++abhineet,WFUNCTION); UINT16 * temp; BYTE rValue, gValue, bValue; // this is to refresh the background desktop ShowWindow(GetDesktopWindow(),SW_HIDE); ShowWindow(GetDesktopWindow(),SW_SHOW); for(int i=0; i<height; i++) { for (int j=0; j< width; j++) { temp = (UINT16 *) (pImageBuf+ i*width*PP_TEST_FRAME_BPP+j*PP_TEST_FRAME_BPP); bValue = (BYTE) ((*temp & RGB_COMPONET0_MASK) >> RGB_COMPONET0_OFFSET) << (8 -RGB_COMPONET0_WIDTH); gValue = (BYTE) ((*temp & RGB_COMPONET1_MASK) >> RGB_COMPONET1_OFFSET) << (8 -RGB_COMPONET1_WIDTH); rValue = (BYTE) ((*temp & RGB_COMPONET2_MASK) >> RGB_COMPONET2_OFFSET) << (8 -RGB_COMPONET2_WIDTH); SetPixel(g_hDisplay, SCREEN_OFFSET_X + j, SCREEN_OFFSET_Y+i, RGB(rValue, gValue, bValue)); } } Sleep(2000); //sleep here to review the result LogEntry(L"%d :Out %s Function \r\n",++abhineet,__WFUNCTION__); } I have to modify this for RGB888 Here in the above function: ************************ RGB_COMPONET0_WIDTH = 5 RGB_COMPONET1_WIDTH = 6 RGB_COMPONET2_WIDTH = 5 ************************ ************************ RGB_COMPONET0_MASK = 0x001F //31 in decimal RGB_COMPONET1_MASK = 0x07E0 //2016 in decimal RGB_COMPONET2_MASK = 0xF800 //63488 in decimal ************************ ************************ RGB_COMPONET0_OFFSET = 0 RGB_COMPONET1_OFFSET = 5 RGB_COMPONET2_OFFSET = 11 ************************ Also PP_TEST_FRAME_BPP = 2 for yv12 -> RGB565 & UYVY -> RGB565 Now my task is for RGB888. Please guide me what shall i do in this. Thanks in advance.

  • Java's Scanner class: using left- and right buttons with Bash

    - by Bart K.
    I'm not too familiar with Linux/Bash, so I can't really find the right terms to search for. Take the snippet: public class Main { public static void main(String[] args) { java.util.Scanner keyboard = new java.util.Scanner(System.in); while(true) { System.out.print("$ "); String in = keyboard.nextLine(); if(in.equals("q")) break; System.out.println(" "+in); } } } If I run it on my Linux box using Bash, I can't use any of the arrow buttons (I'm only interested in the left- and right button, btw). For example, if I type "test" and then try to go back by pressing the left button, ^[[D appears instead of my cursor going back one place: $ test^[[D I've tried the newer Console class as well, but the end result is the same. On Windows' cmd.exe shell, I don't have this problem. So, the question is: is there a way to change my Java code so that I can use the arrow keys without Bash transforming them in sequences like ^[[D but actually move the cursor instead? I'm hoping that I can solve this on a "programming level". If this is not possible, then I guess I'd better try my luck on Superuser to see if there's something I need to change on my Bash console. Thanks in advance.
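
    As far as I know this cannot be fixed at the java.util.Scanner level: the terminal driver is in canonical (line) mode, and the arrow keys are passed through to the program as escape sequences such as ^[[D. One common route, sketched here on the assumption that the third-party JLine 2 library is on the classpath, is to let it put the terminal into raw mode and do the line editing itself:

        import jline.console.ConsoleReader;

        public class Main {
            public static void main(String[] args) throws Exception {
                ConsoleReader reader = new ConsoleReader();     // JLine 2
                String in;
                while ((in = reader.readLine("$ ")) != null) {  // arrow keys now move the cursor
                    if (in.equals("q")) break;
                    System.out.println(" " + in);
                }
            }
        }

    Without touching the code, wrapping the launch in rlwrap (rlwrap java Main) gives similar line editing from the shell side.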

  • Problem inserting android.text.format.Time.toMillis value into SQLite DB on droid

    - by schusselig
    I'm writing an app for Android OS, and I need to store some time values in the SQLite DB. I have been using android.text.format.Time to store the time values in the app, and then inserting the values as millis into the DB as REAL values. On the SDK emulator, everything works perfectly. On the sole phone I've had the opportunity to test my app (so far), my duration code doesn't work as expected. Some relevant code: private static final String DATABASE_CREATE = "create table " + DATABASE_TABLE + " (" + KEY_ROWID + " integer primary key autoincrement, " + KEY_START + " REAL, " + KEY_STOP + " REAL, " + KEY_DUR + " REAL );"; ... private SQLiteDatabase mDb; ContentValues timerValues = new ContentValues(); ... timerValues.put(KEY_START, stime.toMillis(false)); timerValues.put(KEY_STOP, etime.toMillis(false)); timerValues.put(KEY_DURATION, stime.toMillis(false)-etime.toMillis(false)); int result = mDb.insert(DATABASE_TABLE, null, timerValues); I pull this data from two separate functions with slightly different bits of code, both using Time.set(long millis), both giving incorrect results: The start and stop values come back correct, but the duration comes out 17 hours too large. Am I missing something about calculating durations or does this just seem like there's something "special" about this particular droid? I'll have another droid to test on Monday, but any ideas are appreciated.
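
    Two things worth ruling out before blaming the handset (a sketch using the question's own names; whether this explains the 17-hour offset is not certain): the posted code subtracts start minus stop, which yields a negative duration, and it inserts under KEY_DURATION while the table was created with KEY_DUR, so if those constants hold different column names the value read back is not the one written.

        long startMillis = stime.toMillis(false);
        long stopMillis = etime.toMillis(false);
        long durationMillis = stopMillis - startMillis;   // stop minus start, not the reverse

        ContentValues timerValues = new ContentValues();
        timerValues.put(KEY_START, startMillis);
        timerValues.put(KEY_STOP, stopMillis);
        timerValues.put(KEY_DUR, durationMillis);          // same key the CREATE TABLE used
        long rowId = mDb.insert(DATABASE_TABLE, null, timerValues);

    Storing the duration as the difference of the two stored millis values also keeps it independent of whatever time-zone or DST handling Time applies on a particular device.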

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST - that forces a reload. Now I want to prevent the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX-request, I guess I will have to append the token manually? Is there anything out-of-the box available? My current attempt looks like this: I'd love to reuse the existing magic. However, HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like public static string PlainAntiForgeryToken(this HtmlHelper helper) { // extract the actual field value from the hidden input return helper.AntiForgeryToken().DoSomeHackyStringActions(); } which is somewhat hacky and leaves the bigger problem unsolved: How to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private and I really didn't want to copy that, too. At this point it seems to be easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?

  • Help with Exception Handling in ASP.NET C# Application

    - by Shrewd Demon
    hi, yesterday i posted a question regarding the Exception Handling technique, but i did'nt quite get a precise answer, partly because my question must not have been precise. So i will ask it more precisely. There is a method in my BLL for authenticating user. If a user is authenticated it returns me the instance of the User class which i store in the session object for further references. the method looks something like this... public static UsersEnt LoadUserInfo(string email) { SqlDataReader reader = null; UsersEnt user = null; using (ConnectionManager cm = new ConnectionManager()) { SqlParameter[] parameters = new SqlParameter[1]; parameters[0] = new SqlParameter("@Email", email); try { reader = SQLHelper.ExecuteReader(cm.Connection, "sp_LoadUserInfo", parameters); } catch (SqlException ex) { //this gives me a error object } if (reader.Read()) user = new UsersDF(reader); } return user; } now my problem is suppose if the SP does not exist, then it will throw me an error or any other SQLException for that matter. Since this method is being called from my aspx.cs page i want to return some meaning full message as to what could have gone wrong so that the user understands that there was some problem and that he/she should retry logging-in again. but i can't because the method returns an instance of the User class, so how can i return a message instead ?? i hope i made it clear ! thank you.

  • GUID to ByteArray

    - by DutrowLLC
    I just wrote this code to turn a GUID into a byte array. Can anyone shoot any holes in it or suggest something better? public static byte[] getGuidAsByteArray(){ UUID uuid = UUID.randomUUID(); long longOne = uuid.getMostSignificantBits(); long longTwo = uuid.getLeastSignificantBits(); return new byte[] { (byte)(longOne >>> 56), (byte)(longOne >>> 48), (byte)(longOne >>> 40), (byte)(longOne >>> 32), (byte)(longOne >>> 24), (byte)(longOne >>> 16), (byte)(longOne >>> 8), (byte) longOne, (byte)(longTwo >>> 56), (byte)(longTwo >>> 48), (byte)(longTwo >>> 40), (byte)(longTwo >>> 32), (byte)(longTwo >>> 24), (byte)(longTwo >>> 16), (byte)(longTwo >>> 8), (byte) longTwo }; }
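
    No obvious holes in the shifting itself; one alternative sketch (class name made up) does the same big-endian packing with java.nio.ByteBuffer and leaves less room for copy-paste slips:

        import java.nio.ByteBuffer;
        import java.util.UUID;

        public class GuidBytes {
            public static byte[] getGuidAsByteArray() {
                UUID uuid = UUID.randomUUID();
                ByteBuffer buffer = ByteBuffer.allocate(16);   // big-endian by default
                buffer.putLong(uuid.getMostSignificantBits());
                buffer.putLong(uuid.getLeastSignificantBits());
                return buffer.array();
            }
        }

    One caveat either way: .NET's Guid.ToByteArray() stores the first three fields little-endian, so if the bytes ever have to match a GUID produced on that side, the first eight bytes need reordering.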

  • wpf exit thread automatically when application closes

    - by toni
    Hi, I have a main wpf window and one of its controls is a user control that I have created. this user control is an analog clock and contains a thread that update hour, minute and second hands. Initially it wasn't a thread, it was a timer event that updated the hour, minutes and seconds but I have changed it to a thread because the application do some hard work when the user press a start button and then the clock don't update so I changed it to a thread. COde snippet of wpf window: <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:GParts" xmlns:Microsoft_Windows_Themes="clr-namespace:Microsoft.Windows.Themes assembly=PresentationFramework.Aero" xmlns:UC="clr-namespace:GParts.UserControls" x:Class="GParts.WinMain" Title="GParts" WindowState="Maximized" Closing="Window_Closing" Icon="/Resources/Calendar-clock.png" x:Name="WMain" > <...> <!-- this is my user control --> <UC:AnalogClock Grid.Row="1" x:Name="AnalogClock" Background="Transparent" Margin="0" Height="Auto" Width="Auto"/> <...> </Window> My problem is when the user exits the application then the thread seems to continue executing. I would like the thread finishes automatically when main windows closes. code snippet of user control constructor: namespace GParts.UserControls { /// <summary> /// Lógica de interacción para AnalogClock.xaml /// </summary> public partial class AnalogClock : UserControl { System.Timers.Timer timer = new System.Timers.Timer(1000); public AnalogClock() { InitializeComponent(); MDCalendar mdCalendar = new MDCalendar(); DateTime date = DateTime.Now; TimeZone time = TimeZone.CurrentTimeZone; TimeSpan difference = time.GetUtcOffset(date); uint currentTime = mdCalendar.Time() + (uint)difference.TotalSeconds; christianityCalendar.Content = mdCalendar.Date("d/e/Z", currentTime, false); // this was before implementing thread //timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed); //timer.Enabled = true; // The Work to perform ThreadStart start = delegate() { // With this condition the thread exits when main window closes but // despite of this it seems like the thread continues executing after // exiting application because in task manager cpu is very busy // while ((this.IsInitialized) && (this.Dispatcher.HasShutdownFinished== false)) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } Console.Write("Quit ok"); }; // Create the thread and kick it started! new Thread(start).Start(); } // this was before implementing thread void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } } // end class } // end namespace How can I exit correctly from thread automatically when main window closes and then application exits? Thanks very much!

  • How do I avoid repetition in Java ResourceBundle strings?

    - by Trejkaz
    We had a lot of strings which contained the same sub-string, from sentences about checking the log or how to contact support, to branding-like strings containing the company or product name. The repetition was causing a few issues for ourselves (primarily typos or copy/paste errors) but it also causes issues in that it increases the amount of text our translator has to translate. The solution I came up with went something like this: public class ExpandingResourceBundleControl extends ResourceBundle.Control { public static final ResourceBundle.Control EXPANDING = new ExpandingResourceBundleControl(); private ExpandingResourceBundleControl() { } @Override public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) throws IllegalAccessException, InstantiationException, IOException { ResourceBundle inner = super.newBundle(baseName, locale, format, loader, reload); return inner == null ? null : new ExpandingResourceBundle(inner, loader); } } ExpandingResourceBundle delegates to the real resource bundle but performs conversion of {{this.kind.of.thing}} to look up the key in the resources. Every time you want to get one of these, you have to go: ResourceBundle.getBundle("com/acme/app/Bundle", EXPANDING); And this works fine -- for a while. What eventually happens is that some new code (in our case autogenerated code which was spat out of Matisse) looks up the same resource bundle without specifying the custom control. This appears to be non-reproducible if you write a simple unit test which calls it with and then without, but it occurs when the application is run for real. Somehow the cache inside ResourceBundle ejects the good value and replaces it with the broken one. I am yet to figure out why and Sun's jar files were compiled without debug info so debugging it is a chore. My questions: Is there some way of globally setting the default ResourceBundle.Control that I might not be aware of? That would solve everything rather elegantly. Is there some other way of handling this kind of thing elegantly, perhaps without tampering with the ResourceBundle classes at all?

  • Why ClassCastException on JMS ConnectionFactory lookup in JNDI?

    - by Derek Mahar
    What might be the cause of the following ClassCastException in a standalone JMS client application when it attempts to retrieve a connection factory from the JNDI provider? Exception in thread "main" java.lang.ClassCastException: javax.naming.Reference cannot be cast to javax.jms.ConnectionFactory Here is an abbreviated version of the JMS client that includes only its start() and stop() methods. The exception occurs on the first line in method start() which attempts to retrieve the connection factory from the JNDI provider, a remote LDAP server. The JMS connection factory and destination objects are on a remote JMS server. class JmsClient { private ConnectionFactory connectionFactory; private Connection connection; private Session session; private MessageConsumer consumer; private Topic topic; public void stop() throws JMSException { consumer.close(); session.close(); connection.close(); } public void start(Context context, String connectionFactoryName, String topicName) throws NamingException, JMSException { // ClassCastException occurs when retrieving connection factory. connectionFactory = (ConnectionFactory) context.lookup(connectionFactoryName); connection = connectionFactory.createConnection("username","password"); session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); topic = (Topic) context.lookup(topicName); consumer = session.createConsumer(topic); connection.start(); } private static Context getInitialContext() throws NamingException, IOException { String filename = "context.properties"; Properties props = new Properties(); props.load(new FileInputStream(filename)); return new InitialContext(props); } }
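
    A javax.naming.Reference usually comes back when the JNDI client cannot materialise the stored object, most often because the object-factory class recorded inside the Reference (the JMS provider's client jar) is not on the standalone application's classpath. A small diagnostic sketch (helper name made up) prints what the lookup actually returned:

        import javax.naming.Context;
        import javax.naming.NamingException;
        import javax.naming.Reference;

        class JndiDiagnostics {
            // call with the same context and name the client uses, before the cast
            static void describeLookup(Context context, String name) throws NamingException {
                Object candidate = context.lookup(name);
                System.out.println("JNDI returned: " + candidate.getClass().getName());
                if (candidate instanceof Reference) {
                    Reference ref = (Reference) candidate;
                    // this factory class (and the jar containing it) must be on the
                    // classpath for JNDI to turn the Reference into a ConnectionFactory
                    System.out.println("Object factory class: " + ref.getFactoryClassName());
                    System.out.println("Factory location: " + ref.getFactoryClassLocation());
                }
            }
        }

    If the printed factory class is provider-specific, adding that provider's client libraries to the classpath usually turns the lookup result into the expected ConnectionFactory.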

  • Java to JavaScript (Encryption related)

    - by balexandre
    Hi guys, I'm having difficulties to get the same string in Javascript and I'm thinking that I'm doing something wrong... Java code: import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.Date; import java.util.GregorianCalendar; import sun.misc.BASE64Encoder; private static String getBase64Code(String input) throws UnsupportedEncodingException, NoSuchAlgorithmException { String base64 = ""; byte[] txt = input.getBytes("UTF8"); byte[] text = new byte[txt.length+3]; text[0] = (byte)239; text[1] = (byte)187; text[2] = (byte)191; for(int i=0; i<txt.length; i++) text[i+3] = txt[i]; MessageDigest md = MessageDigest.getInstance("MD5"); md.update(text); byte digest[] = md.digest(); BASE64Encoder encoder = new BASE64Encoder(); base64 = encoder.encode(digest); return base64; } I'm trying this using Paj's MD5 script as well Farhadi Base 64 Encode script but my tests fail completely :( my code: function CalculateCredentialsSecret(type, user, pwd) { var days = days_between(new Date(), new Date(2000, 1, 1)); var str = type.toUpperCase() + user.toUpperCase() + pwd.toUpperCase() + days; var md5 = hex_md5(str); var b64 = base64Encode(md5); return encodeURIComponent(b64); } Does anyone know how can I convert this Java method into a Javascript one? Thank you Tests (for today, 3740 days after January 1st, 2000 var secret = CalculateCredentialsSecret('AAA', 'BBB', 'CCC'); // secret SHOULD be: S3GYAfGWlmrhuoNsIJF94w==
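
    Two details of the Java side are easy to miss in a port (restated here in Java rather than JavaScript, as a reference for what the JavaScript has to reproduce, using java.util.Base64 from Java 8+ in place of sun.misc.BASE64Encoder): the digest is computed over a UTF-8 byte-order mark plus the UTF-8 bytes of the input, and it is the 16 raw digest bytes that get Base64-encoded, not their 32-character hex spelling, whereas hex_md5 returns hex.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Base64;

        public class CredentialsSecret {
            static String getBase64Code(String input) throws Exception {
                byte[] txt = input.getBytes(StandardCharsets.UTF_8);
                byte[] text = new byte[txt.length + 3];
                text[0] = (byte) 0xEF;   // the 239, 187, 191 prefix is the UTF-8 BOM
                text[1] = (byte) 0xBB;
                text[2] = (byte) 0xBF;
                System.arraycopy(txt, 0, text, 3, txt.length);
                byte[] digest = MessageDigest.getInstance("MD5").digest(text);
                return Base64.getEncoder().encodeToString(digest);   // raw digest bytes, not hex
            }
        }

    So on the JavaScript side the MD5 implementation has to hand back raw bytes (or the hex has to be converted back to bytes, and the BOM prepended to the input) before the Base64 step, otherwise the strings will never match.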

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    Double has range more than a 64-bit integer, but its precision is less dues to its representation (since double is 64-bit as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part. #include <boost/cstdint.hpp> #include <limits> template<typename T, typename TFloat> void maxint_to_double() { T i = std::numeric_limits<T>::max(); TFloat d = i; std::cout << std::fixed << i << std::endl << d << std::endl; } int main() { maxint_to_double<int, double>(); maxint_to_double<boost::intmax_t, double>(); maxint_to_double<int, float>(); return 0; } This prints: 2147483647 2147483647.000000 9223372036854775807 9223372036854775800.000000 2147483647 2147483648.000000 Note how max int can fit into a double without loss of precision and boost::intmax_t (64-bit in this case) cannot. float can't even hold an int. Now, the question: is there a way in C++ to check if the entire range of a given integer type can fit into a loating point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and would not involve enumerating the constants the compiler should know or can compute.

  • JVM terminates when launching Eclipse with J2SE 6.0 on Mac OS X (need J2SE 6.0 for Oracle Enterprise Pack for Eclipse)

    - by rooban bajwa
    I know my issue has party been addressed at this link http://stackoverflow.com/questions/245803/jvm-terminates-when-launching-eclipse-mat-on-mac-os-with-j2se-60 but it was a year+ ago.. plus the link that's provided in there http://landonf.bikemonkey.org/static/soylatte/ does not seem to be alive (i mean the download section on that link no longer provide the 32-bit port of j2se 6.0 for mac osx 10.5) I am trying to run eclipse 3.5 on mac OSX 10.5. It works fine with J2SE 5.0. But when I installed the Oracle enterprise pack for eclipse - it requires to start eclipse with J2SE 6.0 JVM otherwise it will get disabled. Here's the exact message I get from it - "You are running Eclipse on Java VM version: 1.5.0_22 Oracle Enterprise Pack for Eclipse requires Java version 6 or higher. Click next to configure a compatible Java VM." It asks me to point to J2SE 6.0 JVM, when I do that (i.e point it to "/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home") , it asks to restart eclipse , when I do that, eclipse just bombs .. with JVM terminated error .. SO I need to start eclipse with J2SE 6.0 JVM but eclipse needs carbon which is only available in 32 bits and hence I cann't start eclipse with J2SE 6.0 JVM which is only available in 64bit mode from mac. And the site providing 32 bit port of J2SE 6.0 JVM does not seem to be active anymore.. Can someone help me on this issue, Thanks in advance,
