Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • Servlet that starts a thread only once for every visitor

    - by user858749
    Hey, I want to implement a Java servlet that starts a thread only once for every single user. Even on refresh it should not start again. My last approach brought me some trouble, so I won't post that code ^^. Any suggestions for the layout of the servlet?

        public class LoaderServlet extends HttpServlet {

            // The thread to load the needed information
            private LoaderThread loader;
            // The last.fm account
            private String lfmaccount;

            public LoaderServlet() {
                super();
                lfmaccount = "";
            }

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (loader != null) {
                    response.setContentType("text/plain");
                    response.setHeader("Cache-Control", "no-cache");
                    PrintWriter out = response.getWriter();
                    out.write(loader.getStatus());
                    out.flush();
                    out.close();
                } else {
                    loader = new LoaderThread(lfmaccount);
                    loader.start();
                    request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
                }
            }

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (lfmaccount.isEmpty()) {
                    lfmaccount = request.getSession().getAttribute("lfmUser").toString();
                }
                request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
            }
        }

    The JSP uses Ajax to regularly post to the servlet and get the status. The thread runs for about 3 minutes, crawling some Last.fm data.
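
    One way to get at most one thread per user (a sketch, not from the original post): servlet instance fields like the loader field above are shared by every visitor, so keying the thread off the HttpSession avoids both the sharing and the restart-on-refresh problem. LoaderThread and the "lfmUser" session attribute are assumed to behave as in the snippet above.

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class LoaderServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                HttpSession session = request.getSession();
                LoaderThread loader = (LoaderThread) session.getAttribute("loader");
                if (loader == null) {
                    // First request from this user: start the thread exactly once and
                    // remember it in the session so a refresh finds the same instance.
                    String account = String.valueOf(session.getAttribute("lfmUser"));
                    loader = new LoaderThread(account);
                    session.setAttribute("loader", loader);
                    loader.start();
                    request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
                } else {
                    // Later Ajax polls only report the status of the already-running thread.
                    response.setContentType("text/plain");
                    response.setHeader("Cache-Control", "no-cache");
                    PrintWriter out = response.getWriter();
                    out.write(loader.getStatus());
                    out.flush();
                }
            }
        }

    Storing a live Thread in the session is a judgment call; an alternative with the same effect is to keep only a "started" flag and the status object in the session and hold the threads in an application-scoped map.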

    Read the article

  • Applying MVVM to ASP.NET

    - by Moussa
    Hi everyone, I am learning MVVM and i want to use it with ASP.NET. Some of the examples that i found on the internet uses XAML for the view. Is there a way to use a regular ASP.NET page instead of XAML for the view? Here is a XAML example: <UserControl x:Class="MVVMExample.DetailView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Grid x:Name="LayoutRoot" Background="White" DataContext="{Binding CurrentContact}"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <TextBlock Text="Name:" HorizontalAlignment="Right" Margin="5"/> <TextBlock Text="{Binding FullName}" HorizontalAlignment="Left" Margin="5" Grid.Column="1"/> <TextBlock Text="Phone:" HorizontalAlignment="Right" Margin="5" Grid.Row="1"/> <TextBlock Text="{Binding PhoneNumber}" HorizontalAlignment="Left" Margin="5" Grid.Row="1" Grid.Column="1"/> </Grid> </UserControl> Thank you for your time.

    Read the article

  • Why does Eclipse skip lines when I debug JBoss?

    - by ZeKoU
    Hi, I am trying to debug a web service call which uses JMS in the background. I have JBoss running in debug mode. What happens is that when I press F6 in Eclipse (to execute the current line) it skips certain lines. I have this method:

        @Override
        public void log(MsgPayload payload) {
        1   Date startTime = new Date();
            logger.info("Publishing with BufferedPublisher.java start time:"+startTime);
        3   publisher.send(payload);
            Date endTime = new Date();
            logger.info("Publishing with BufferedPublisher.java end time:"+endTime);
            long mills = endTime.getTime()-endTime.getTime();
            double secs = mills/1000.0;
            logger.info("Publishing with BufferedPublisher.java total time (seconds):"+secs);
        }

    So what happens? I have a breakpoint at line 1. When I press F6 it skips that line and goes to line 3. When I press F6 again it goes to the end of the method. Half of the code is never executed. My question is why. I am assuming my source is not correctly attached to the real code that is being executed, but how do I change this? Thanks.

    Read the article

  • Spring Framework 3.0: How to create interceptors that only apply to requests that map to certain controllers

    - by Fusion2004
    In its simplest form, I want an interceptor that checks session data to see if a user is logged in, and if not redirects them to the login page. Obviously, I wouldn't want this interceptor to be used on, say, the welcome page or the login page itself. I've seen a design that maps every URL to one of two interceptors, one doing nothing and the other being the actual interceptor you want implemented, but this design seems very clunky and limits the ease of extensibility of the application. It makes sense to me that there should be an annotation-based way of using interceptors, but this doesn't seem to exist. My friend has the idea of actually modifying the handler class so that during each request it checks the controller it is mapping the request to for a new annotation we would create (e.g. @Interceptor("loginInterceptor")). A major point of my thinking is extensibility, because I'd like to later implement similar interceptors for role-based authentication and/or administration authentication. Does it sound like my friend's approach would work for this? Or what is a proper way of going about doing this?
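
    For what it's worth, the annotation idea can usually be done without touching the handler mapping itself: a plain HandlerInterceptor registered for all URLs can inspect the mapped controller for a custom annotation and only act when it is present. This is only a sketch against Spring 3.0-style annotated controllers (where the handler object passed to preHandle is the controller instance); @RequiresLogin and the "user" session attribute are made-up names.

        import java.lang.annotation.*;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

        // Hypothetical marker annotation (not part of Spring).
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        @interface RequiresLogin {}

        public class LoginInterceptor extends HandlerInterceptorAdapter {
            @Override
            public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                                     Object handler) throws Exception {
                boolean needsLogin = handler.getClass().isAnnotationPresent(RequiresLogin.class);
                if (needsLogin && request.getSession().getAttribute("user") == null) {
                    response.sendRedirect(request.getContextPath() + "/login");
                    return false;   // stop here; the controller is never invoked
                }
                return true;        // welcome/login pages simply omit the annotation
            }
        }

    Role-based or admin checks could then be additional annotations (or attributes on the same one) read by the same interceptor, which keeps the URL configuration out of it entirely.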

    Read the article

  • How can I split up a component using cfinclude and still use inheritance?

    - by rip747
    Note: this is just a simplified example of what I'm trying to do, to get the idea across. I want to use cfinclude inside cfcomponent so that I can group like methods into separate files for easier management. The problem appears when I try to extend another component that also uses cfinclude to manage its methods, as demonstrated below. Note that ComponentA extends ComponentB:

        ComponentA
        ==========
        <cfcomponent output="false" extends="componentb">
            <cfinclude template="componenta/methods.cfm">
        </cfcomponent>

        componenta/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componenta-a"></cffunction>
        <cffunction name="b"><cfreturn "componenta-b"></cffunction>
        <cffunction name="c"><cfreturn "componenta-c"></cffunction>
        <cffunction name="d"><cfreturn super.a()></cffunction>

        ComponentB
        ==========
        <cfcomponent output="false">
            <cfinclude template="componentb/methods.cfm">
        </cfcomponent>

        componentb/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componentb-a"></cffunction>
        <cffunction name="b"><cfreturn "componentb-b"></cffunction>
        <cffunction name="c"><cfreturn "componentb-c"></cffunction>

    The issue is that when I try to initialize ComponentA I get the error: "Routines cannot be declared more than once. The routine a has been declared twice in different templates." The whole reason for this is that when you use cfinclude it is evaluated at RUN TIME instead of COMPILE TIME. Short of moving the methods into the components themselves and eliminating the use of cfinclude, how can I get around this, or does someone have a better idea for splitting up large components?

    Read the article

  • Dismiss android activity displayed as Popup

    - by Sit
    So I have a service that starts an activity displayed as a popup, thanks to "android:style/Theme.Dialog". This activity shows a ListView with a list of applications. On each element of the ListView there is a short description of the application and two buttons: one for launching the application, and one for displaying a toast with more information. Here is the code in my service; it starts the activity:

        Intent intent = new Intent(this, PopUpActivity.class);
        intent.addFlags(Intent.FLAG_DEBUG_LOG_RESOLUTION);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        getApplicationContext().startActivity(intent);

    This activity uses a custom layout with a ListView, adapted with a custom ArrayAdapter. In this adapter, I've put an action on the start button in order to launch the chosen application:

        Button lanceur = (Button) v.findViewById(R.id.Buttonlancer);
        lanceur.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                p.start(mcontext);
            }
        });

    With p.start, I start the application. But now, if I press "back" from that application, I go back to the popup and can start another application. I don't want that to be possible. That's why I wish I could dismiss/destroy/finish my PopUpActivity, but I can't manage to do it with the code I have.
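
    One possible fix (a sketch, assuming mcontext is actually the PopUpActivity, or at least an Activity, that created the adapter): finish the dialog-styled activity right after launching the chosen application, so "back" from that application cannot return to the popup.

        Button lanceur = (Button) v.findViewById(R.id.Buttonlancer);
        lanceur.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                p.start(mcontext);                  // launch the selected application
                if (mcontext instanceof Activity) {
                    ((Activity) mcontext).finish(); // dismiss the popup so back cannot reach it
                }
            }
        });

    Another option with the same effect would be adding Intent.FLAG_ACTIVITY_NO_HISTORY to the intent the service uses to start PopUpActivity, so the popup is never kept on the back stack.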

    Read the article

  • What is this: main:for(...){...} doing?

    - by David Murdoch
    I pulled up the NWmatcher source code for some light morning reading and noticed this odd bit of code I'd never seen in javascript before: main:for(/*irrelevant loop stuff*/){/*...*/} This snippet can be found in the compileGroup method on line 441 (nwmatcher-1.1.1) return new Function('c,s,d,h', 'var k,e,r,n,C,N,T,X=0,x=0;main:for(k=0,r=[];e=N=c[k];k++){' + SKIP_COMMENTS + source + '}return r;' ); Now I figured out what main: is doing on my own. If you have a loop within a loop and want to skip to the next iteration of the outer loop (without completing the inner OR the outer loop) you can execute continue main. Example: // This is obviously not the optimal way to find primes... function getPrimes(max) { var primes = [2], //seed sqrt = Math.sqrt, i = 3, j, s; outer: for (; i <= max; s = sqrt(i += 2)) { j = 3; while (j <= s) { if (i % j === 0) { // if we get here j += 2 and primes.push(i) are // not executed for the current iteration of i continue outer; } j += 2; } primes.push(i); } return primes; } What is this called? Are there any browsers that don't support it? Are there other uses for it other than continue?
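
    For comparison only (not from the article): this is usually called a labeled statement, and Java has the same construct with the same semantics, including the other common use the question asks about, a labeled break that exits several nested loops at once. A small Java sketch:

        public class LabeledLoopDemo {
            public static void main(String[] args) {
                int[][] rows = { {1, 2, 3}, {4, -1, 6}, {7, 8, 9} };
                search:
                for (int i = 0; i < rows.length; i++) {
                    for (int j = 0; j < rows[i].length; j++) {
                        if (rows[i][j] < 0) {
                            System.out.println("first negative value at " + i + "," + j);
                            break search;   // leaves BOTH loops, not just the inner one
                        }
                    }
                }
            }
        }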

    Read the article

  • PHP loop hanging/interspersed/threaded through HTML

    - by sandyv
    I can't figure out how to say what I'm talking about, which is a big part of why I'm stuck. In PHP I often see code like this:

        html
        <?php language construct that uses brackets { some code; ?>
        more html
        <?php some more code; } ?>
        rest of html

    Is there any name for this? Having seen this led me to try it out, so here is a concrete example whose behavior doesn't make sense to me:

        <div id="content">
            <ul id="nav">
            <?php
            $path = 'content';
            $dir = dir($path);
            while(false !== ($file = $dir->read())) {
                if(preg_match('/.+\.txt/i', $file)) {
                    echo "<li>$file</li>";
            ?>
            </ul>
            <?php
                    echo file_get_contents($path . '/' . $file);
                }
            }
            ?>
        </div>

    This outputs roughly <div><ul><li></li></ul><li></li>...</div> instead of <div><ul><li></li>...</ul></div>, which is what I thought would happen and what I want to happen.

    Read the article

  • In need of a Smarter Environmental Package Configuration

    - by Jeremy Liberman
    I am trying to set up a package template in SSIS, following the Wrox Programmer to Programmer book, SQL Server 2008 Integration Services: Problem - Design - Solution. I'm really liking this book even though it is for 2008 and we're using SQL Server 2005. I've got a working package template that uses an Indirect XML package configuration to identify what environment (local developer, dev, QA, production, etc.) the package is being run in. That locates the SQL Server package configuration for the environment. That set-up is great and all, except for the environment variable at the very front of it all. My team would prefer it if the package could use the same environment resource locator as all our other applications and tools use, so we don't have two environment markers with essentially the same information in them. Normally we look up a registry key in HKEY_LOCAL_MACHINE, but the Registry Package Configuration type only lets you look up keys under HKEY_CURRENT_USER. My first thought was to write a new Package Configuration Type class that extends the Registry type; after all, we'd had such luck writing our own custom log provider. SSIS is super extendable, right? As it turns out, there doesn't seem to be a way to write your own Package Configuration Types. Is there still some way I can configure my SSIS SQL Server package configuration from an HKLM registry key connection string? If this is not possible, what other workarounds are available? My idea is to write a PowerShell script that will create/modify the environment variable that the package will use by fetching the connection string from the registry. This way there are still two markers, but at least then it's automatically maintained and automated. Is this kind of workaround necessary? Thank you for your time.

    Read the article

  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform on top of Nucleus RTOS. It uses the Nucleus threading system, but it doesn't have support for explicit thread-local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs. The platform doesn't have user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How can the __thread keyword be mapped to explicit thread-local storage? I read many articles, but none of them clearly answer these basic questions: Will a __thread variable be different for each thread? How do I write to it and read from it? Does each thread have exactly one copy of the variable? The following is the pthread-based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            int* value;
        };

        inline ThreadSpecific() {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific() {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get() {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr) {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How do I make the above code use __thread to set and get a thread-specific value? Where/when do the create and delete happen? If this is not possible, how do I write custom pthread_setspecific/pthread_getspecific-like APIs? I tried using a C++ global map, indexing it uniquely for each thread and retrieving data from it, but that didn't work well.
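
    Purely as a point of comparison (the question is about C++ on Nucleus, where none of this applies directly), Java's ThreadLocal illustrates the semantics being asked about: each thread gets its own copy, get/set only touch the calling thread's copy, and the initial value is created lazily per thread.

        public class ThreadLocalDemo {
            // Each thread that touches "counter" gets its own copy, initialised to 0.
            private static final ThreadLocal<Integer> counter = new ThreadLocal<Integer>() {
                @Override protected Integer initialValue() { return 0; }
            };

            public static void main(String[] args) throws InterruptedException {
                Runnable task = new Runnable() {
                    public void run() {
                        counter.set(counter.get() + 1);   // writes only this thread's copy
                        System.out.println(Thread.currentThread().getName() + " -> " + counter.get());
                    }
                };
                Thread t1 = new Thread(task, "t1");
                Thread t2 = new Thread(task, "t2");
                t1.start(); t2.start();
                t1.join(); t2.join();
                System.out.println("main -> " + counter.get());   // still 0 in the main thread
            }
        }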

    Read the article

  • Cross-Application User Authentication

    - by Chris Lieb
    We have a webapp written in .NET that uses NTLM for SSO. We are writing a new webapp in Java that will tightly integrate with the original application. Unfortunately, Java has no support for performing the server portion of NTLM authentication, and the only library that I can find requires too much setup to be allowed by IT. To work around this, I came up with a remote authentication scheme to work across applications and would like your opinions on it. It does not need to be extremely secure, but at the same time it should not be easily broken.

    1. The user is authenticated into the .NET application using NTLM.
    2. The user clicks a link that leaves the .NET application.
    3. The .NET application generates a random number and stores it in the user table along with the user's full username (domain\username).
    4. An insecure token is formed as random number:username.
    5. The insecure token is run through a secure cipher (likely AES-256) using a pre-shared key stored within the application, producing a secure token.
    6. The secure token is passed as part of the query string to the Java application.
    7. The Java application decrypts the secure token using the same pre-shared key stored within its own code to get the insecure token.
    8. The random number and username are split apart.
    9. The username is used to retrieve the user's information from the user table, and the stored random number is checked against the one pulled from the insecure token.
    10. If the numbers match, the username is put into the session and the user is now authenticated.
    11. If the numbers do not match, the user is redirected to the .NET application's home page.
    12. The random number is removed from the database.
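
    A rough Java sketch of the token handling described above (forming random:username, encrypting it, passing it on the query string, and decrypting it on the Java side). Both halves are shown in Java just to keep the sketch in one language; the all-zero key, the CBC mode, and the Base64 encoding are assumptions added here, and a real token should probably also carry an expiry time and an HMAC.

        import java.security.SecureRandom;
        import java.util.Arrays;
        import java.util.Base64;
        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        public class CrossAppToken {
            // Pre-shared 256-bit key, assumed to be configured identically in both
            // applications (older JDKs may need the unlimited-strength JCE policy
            // files for 256-bit AES).
            private static final byte[] KEY = new byte[32];   // placeholder, all zeros

            // Form "random:username", encrypt it, and make it query-string safe.
            public static String encrypt(long random, String username) throws Exception {
                byte[] plain = (random + ":" + username).getBytes("UTF-8");
                byte[] iv = new byte[16];
                new SecureRandom().nextBytes(iv);
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(iv));
                byte[] enc = cipher.doFinal(plain);
                byte[] token = new byte[iv.length + enc.length];
                System.arraycopy(iv, 0, token, 0, iv.length);
                System.arraycopy(enc, 0, token, iv.length, enc.length);
                return Base64.getUrlEncoder().encodeToString(token);
            }

            // Decrypt the query-string parameter and split it back apart.
            public static String[] decrypt(String tokenParam) throws Exception {
                byte[] token = Base64.getUrlDecoder().decode(tokenParam);
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"),
                        new IvParameterSpec(Arrays.copyOfRange(token, 0, 16)));
                byte[] plain = cipher.doFinal(Arrays.copyOfRange(token, 16, token.length));
                return new String(plain, "UTF-8").split(":", 2);   // [random, domain\username]
            }
        }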

    Read the article

  • Local vs core controller

    - by latvian
    Hi, I am adding a new column and action in the local admin grid app/code/local/Mage/Adminhtml/Block/Catalog/Product/Grid.php, which works fine. However, the local controller /app/code/local/Mage/Adminhtml/Block/Catalog/Product.php is not being used and is not overloading the core admin one, /app/code/core/Mage/Adminhtml/Block/Catalog/Product.php. This is an almost fresh install of Magento 1.4.0.1. I am the only one working on it, so I know it is not overloaded by some custom controller. I have disabled all custom modules, rolled back most of my changes, checked /etc/Modules/Mage_Catalog.xml, refreshed the cache in all possible ways, and logged in and out. Nothing; it is still using the core controller copy. Why? How do you troubleshoot this, meaning, at what moment does Magento decide between the core and local copies? It is even more strange because it does not parse the local Adminhtml config.xml but does use the local Adminhtml copy of the Blocks. Any pointer would help. I would like to keep everything in local code. Thank you, Margots

    Read the article

  • How do I "rebind" the click event after unbind('click') ?

    - by Ben
    I have an anchor tag <a class="next">next</a> made into a "button". Sometimes this tag needs to be hidden if there is nothing new to show. All works fine if I simply hide the button with .hide() and re-display it with .show(), but I wanted to use .fadeIn() and .fadeOut() instead. The problem I'm having is that if the user clicks on the button during the fadeOut animation, it can cause problems with the logic I have running the show. The solution I found was to unbind the click event from the button after the original click function begins, and then re-bind it after the animation is complete.

        $('a.next').click(function() {
            $(this).unbind('click');
            ... // calls some functions, one of which fades out the a.next if needed ...
            $(this).bind('click');
        }

    The last part of the above example does not work: the click event is not actually re-bound to the anchor. Does anyone know the correct way to accomplish this? I'm a self-taught jQuery guy, so some of the higher-level things like unbind() and bind() are over my head, and the jQuery documentation isn't really simple enough for me to understand.

    Read the article

  • How to optimize paging for large in memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory using an stl-map for each table in the database. Each item in the stl-map is a complex object with references to items in the other stl-maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients are able to contact the application and get a filtered version of the entire database. This is done by running through the entire database and finding items relevant for the client. When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of the application's RAM (even though there is 16 GB of RAM on the machine). After the application has been partly paged out, a client logon takes a long time (10 minutes) because it now generates a page fault for each pointer lookup in the stl-map. I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution could be to loop through the entire in-memory database and thus tell Windows we are still interested in keeping the data model in RAM. I guess another poor man's solution could be to disable the pagefile completely on Windows. I guess the expensive solution would be a SQL database, and then rewrite the entire application to use a database layer; then hopefully the database system will have implemented means for fast access. Are there other, more elegant solutions?

    Read the article

  • Upgrade to iPhone 3.0 sdk and now simulator shows blank screen.

    - by NoShitMcGee
    I have an iPhone app that uses an UITabBarController, which contains two UINavigationControllers, each of which in turn contains one or more TableViewControllers (actually, customized UIViewControllers implementing UITableViewDelegate and UITableViewDataSource. ) On launch, it displays the UITabBarController with one of the tableviews displayed. Everything is coded; Interface Builder was NOT used to make any of the UI stuff. It was written in SDK 2. It worked fine in sdk 2. I recently updated to SDK 3.0. In Info, I set the Base SDK setting to iPhone Simulator 3.0. Now, when I launch the application in Simulator, I see only a blank white screen with the status bar at the top. No signs of my app. However, when I exit the app, the missing tableview displays briefly as the exiting animation is playing. Also, on the blank white screen I can still click where the navigation buttons should be and find that, when I exit the app and the missing screen briefly displays, that navigation has taken me to another screen. So the buttons work, and presumably the tableviewcells are there, they just cannot be seen. Has anyone seen anything like this? Does anyone have any idea what is causing it and how I can fix it? I noticed that sample apps, such as SQLiteBooks, seem to work fine when updating to SDK 3.0. My app isn't very much different from SQLiteBooks in terms of technologies used, except that, as I said above, I do not use Interface Builder. Thanks

    Read the article

  • Reduce file size for charts pasted from Excel into Word

    - by Steve Clanton
    I have been creating reports by copying some charts and data from an excel document into a word document. I am pasting into a content control, so i use ChartObject.CopyPicture in excel and ContentControl.Range.Paste in word. This is done in a loop: Set ws = ThisWorkbook.Worksheets("Charts") With ws For Each cc In wordDocument.ContentControls If cc.Range.InlineShapes.Count > 0 Then scaleHeight = cc.Range.InlineShapes(1).scaleHeight scaleWidth = cc.Range.InlineShapes(1).scaleWidth cc.Range.InlineShapes(1).Delete .ChartObjects(cc.Tag).CopyPicture Appearance:=xlScreen, Format:=xlPicture cc.Range.Paste cc.Range.InlineShapes(1).scaleHeight = scaleHeight cc.Range.InlineShapes(1).scaleWidth = scaleWidth ElseIf ... Next cc End With Creating these reports using Office 2007 yielded files that were around 6MB, but creating them (using the same worksheet and document) in Office 2010 yields a file that is around 10 times as large. After unzipping the docx, I found that the extra size comes from emf files that correspond to charts that are pasted in using VBA. Where they range from 360 to 900 KB before, they are 5-18 MB. And the graphics are not visibly better. I am able to CopyPicture with the format xlBitmap, and while that is somewhat smaller, it is larger than the emf generated by Office 2007 and noticeably poorer quality. Are there any other options for reducing the file size? Ideally, I would like to produce a file with the same resolution for the charts as I did using Office 2007. Is there any way that uses VBA only (without modifying the charts in the spreadsheet)?

    Read the article

  • How can I measure the time it takes a file to upload to a PHP script?

    - by Gustav Bertram
    I am trying to measure the time a script takes to upload a file to a PHP script, inside the receiving script. Here's a short example: <form action="#" enctype="multipart/form-data" method="post"> Upload: <input type="file" name="datafile" size="40"> <input type="submit" value="Send"> </form> <?php $upload_time = time() - $_SERVER['REQUEST_TIME']; echo $upload_time . " seconds."; I've submitted 4MB, 25MB, 100MB and 1.4GB files. They took anything from a fraction of a second, to almost a minute, yet the script above always yielded 0 seconds. NOTE: I know that this is a duplicate question, but the accepted answer on the other question did not work for me (at least, initially): Detect how long it takes for a file to upload (PHP) I've tried it using PHP 5.3.3 with Apache 2.2.16 on Ubuntu 10.4. UPDATE: I've discovered that both the Apache header and Ajax solutions work. In fact, the solution on the duplicate question also works. The critical element to making these solutions work is to ensure that the following values in the php.ini are sufficiently high: memory_limit = 1000M upload_max_filesize = 1000M post_max_size = 1000M I am accepting the Apache header solution, since it only uses PHP.

    Read the article

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week that takes two parameters and uses them to select a column of "detail" records and returns them as a single varchar list of comma separated values. Now that I get to thinking about it, I would like to take this table and application-specific function and make it more generic. I am not well-versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way? Instead of calling: SELECT ejc_concatFormDetails(formuid, categoryName) I would like to make it work like: SELECT concatColumnValues(SELECT someColumn FROM SomeTable) Here is my function definition: FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75)) RETURNS VARCHAR(1000) AS BEGIN DECLARE @returnData VARCHAR(1000) DECLARE @currentData VARCHAR(75) DECLARE dataCursor CURSOR FAST_FORWARD FOR SELECT data FROM DNet.ejc_FormDetails WHERE formuid = @formuid AND category = @category SET @returnData = '' OPEN dataCursor FETCH NEXT FROM dataCursor INTO @currentData WHILE (@@FETCH_STATUS = 0) BEGIN SET @returnData = @returnData + ', ' + @currentData FETCH NEXT FROM dataCursor INTO @currentData END CLOSE dataCursor DEALLOCATE dataCursor RETURN SUBSTRING(@returnData,3,1000) END As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma separated varchar. How can I alter this to accept a single parameter that is a result set and then access that result set with a cursor?

    Read the article

  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files, I came across hexpat. There is an example on the Haskell Wiki that claims to "-- Process document before handling error, so we get lazy processing." For testing purposes I redirected the output to /dev/null and threw a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Now I removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml
            hClose hFile
            return ()

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If so, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/

    Read the article

  • Android ACTION_BOOT_COMPLETED called every time?

    - by user3976029
    I am trying to write code in Android to run a condition during booting, but my condition is satisfied every time (during booting as well as while the device is running). What I am trying to do is execute the condition during booting only. My code, MainActivity.java:

        package com.example.bootingtest;

        import android.os.Bundle;
        import android.widget.Toast;
        import android.app.Activity;
        import android.content.Intent;

        public class MainActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                if (Intent.ACTION_BOOT_COMPLETED!=null) {
                    Toast.makeText(getApplicationContext(), "Device is booting ...", Toast.LENGTH_LONG).show();
                }
            }
        }

    I have given the manifest permission: <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" /> I want to execute this condition only during booting or device start-up, but the condition is satisfied every time, whenever I open the app. Please suggest how I can run the condition only during device booting or start-up. Please help me out.
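
    For reference (not from the post): Intent.ACTION_BOOT_COMPLETED is just a constant String, so the check ACTION_BOOT_COMPLETED != null is always true, which is why the toast shows on every launch of the activity. The usual way to run something only at boot is a BroadcastReceiver registered for that action. A minimal sketch:

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.widget.Toast;

        public class BootReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                // Fires only when the system broadcasts BOOT_COMPLETED after start-up.
                if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
                    Toast.makeText(context, "Device has booted", Toast.LENGTH_LONG).show();
                    // start a Service or Activity here if more work is needed at boot
                }
            }
        }

    The receiver also has to be declared in AndroidManifest.xml with an intent-filter for android.intent.action.BOOT_COMPLETED, alongside the RECEIVE_BOOT_COMPLETED permission the post already has.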

    Read the article

  • Move iFrame in the DOM using jQuery

    - by Josh
    My overall aim is to create an editor which I can skin using jQuery UI (by creating a custom toolbar which uses integration calls), using TinyMCE. Lets say I have a TinyMCE editor on a page. The actual editor is an iFrame contained inside a lot of horrible table code, which is also where the current (to be scrapped) toolbar is. I want just the iframe inside a div - ideally get rid of the table code. So...I want to transform: <table> <tr> <td><iframe id="xyz"></iframe></td> </tr> </table> into <div id="test"> <iframe id="xyz"></iframe> </div> So far, I've tried using: $('#xyz').clone(true).appendTo('#test'); Which clones the iframe, but no content inside it. Is there a way to make this work? If not, can I somehow strip the table code from around the iFrame away? If I cant do that, I'll think I'll have to keep the table code.

    Read the article

  • Accommodating Multiple DLL Versions

    - by shadeseeker
    I have an application that uses a Microsoft DLL (Microsoft.ComponentStudio.ComponentPlatformImplementation.dll) which is used for OS deployment and accessing the catalog files. Version 6.0.0.0 is specific to the Windows Server 2008 catalog files. The newer version 6.1.0.0 is specific to Windows Server 2008 R2 catalog files. Attempting to access a catalog file with the incorrect version results in an exception. My application (VB.NET using VS2005) needs to be able to access either version of these catalogs - I'd be happy with two executables (one for each catalog version) but obviously I don't want to maintain two sets of source code for each. Specifying both sets of DLLs in the project reference is not possible as the DLL names are identical. I'd rather not have to manually add and remove the DLL references each time I want to a build. As far as I know the interfaces etc are effectively identical between the two. I've read a few articles here and elsewhere about bindingRedirect, Assembly.Load etc but none seem to be bearing fruit. Any guidance on the best path to follow would be greatly appreciated. Thanks.

    Read the article

  • Is this linear search implementation actually useful?

    - by Helper Method
    In Matters Computational I found this interesting linear search implementation (it's actually my Java implementation ;-)): public static int linearSearch(int[] a, int key) { int high = a.length - 1; int tmp = a[high]; // put a sentinel at the end of the array a[high] = key; int i = 0; while (a[i] != key) { i++; } // restore original value a[high] = tmp; if (i == high && key != tmp) { return NOT_CONTAINED; } return i; } It basically uses a sentinel, which is the searched for value, so that you always find the value and don't have to check for array boundaries. The last element is stored in a temp variable, and then the sentinel is placed at the last position. When the value is found (remember, it is always found due to the sentinel), the original element is restored and the index is checked if it represents the last index and is unequal to the searched for value. If that's the case, -1 (NOT_CONTAINED) is returned, otherwise the index. While I found this implementation really clever, I wonder if it is actually useful. For small arrays, it seems to be always slower, and for large arrays it only seems to be faster when the value is not found. Any ideas?
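
    For anyone who wants to measure rather than guess, a bounds-checked baseline to compare against (same contract and NOT_CONTAINED constant as above) is sketched below. Note also that the sentinel version temporarily writes into the array, so it cannot be used on shared or read-only arrays, and it throws on an empty array.

        // Conventional linear search: one explicit bounds check per iteration.
        public static int linearSearchPlain(int[] a, int key) {
            for (int i = 0; i < a.length; i++) {
                if (a[i] == key) {
                    return i;
                }
            }
            return NOT_CONTAINED;
        }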

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Entity Framework does not map 2 columns from a SqlQuery calling a stored procedure

    - by user1783530
    I'm using Code First and am trying to call a stored procedure and have it map to one of my classes. I created a stored procedure, BOMComponentChild, that returns details of a Component with information of its hierarchy in PartsPath and MyPath. I have a class for the output of this stored procedure. I'm having an issue where everything except the two columns, PartsPath and MyPath, are being mapped correctly with these two properties ending up as Nothing. I searched around and from my understanding the mapping bypasses any Entity Framework name mapping and uses column name to property name. The names are the same and I'm not sure why it is only these two columns. The last part of the stored procedure is: SELECT t.ParentID ,t.ComponentID ,c.PartNumber ,t.PartsPath ,t.MyPath ,t.Layer ,c.[Description] ,loc.LocationID ,loc.LocationName ,CASE WHEN sup.SupplierID IS NULL THEN 1 ELSE sup.SupplierID END AS SupplierID ,CASE WHEN sup.SupplierName IS NULL THEN 'Scinomix' ELSE sup.SupplierName END AS SupplierName ,c.Active ,c.QA ,c.IsAssembly ,c.IsPurchasable ,c.IsMachined ,t.QtyRequired ,t.TotalQty FROM BuildProducts t INNER JOIN [dbo].[BOMComponent] c ON c.ComponentID = t.ComponentID LEFT JOIN [dbo].[BOMSupplier] bsup ON bsup.ComponentID = t.ComponentID AND bsup.IsDefault = 1 LEFT JOIN [dbo].[LookupSupplier] sup ON sup.SupplierID = bsup.SupplierID LEFT JOIN [dbo].[LookupLocation] loc ON loc.LocationID = c.LocationID WHERE (@IsAssembly IS NULL OR IsAssembly = @IsAssembly) ORDER BY t.MyPath and the class it maps to is: Public Class BOMComponentChild Public Property ParentID As Nullable(Of Integer) Public Property ComponentID As Integer Public Property PartNumber As String Public Property MyPath As String Public Property PartsPath As String Public Property Layer As Integer Public Property Description As String Public Property LocationID As Integer Public Property LocationName As String Public Property SupplierID As Integer Public Property SupplierName As String Public Property Active As Boolean Public Property QA As Boolean Public Property IsAssembly As Boolean Public Property IsPurchasable As Boolean Public Property IsMachined As Boolean Public Property QtyRequired As Integer Public Property TotalQty As Integer Public Property Children As IDictionary(Of String, BOMComponentChild) = New Dictionary(Of String, BOMComponentChild) End Class I am trying to call it like this: Me.Database.SqlQuery(Of BOMComponentChild)("EXEC [BOMComponentChild] @ComponentID, @PathPrefix, @IsAssembly", params).ToList() When I run the stored procedure in management studio, the columns are correct and not null. I just can't figure out why these won't map as they are the important information in the stored procedure. The types for PartsPath and MyPath are varchar(50).

    Read the article
