Search Results

Search found 2043 results on 82 pages for 'newly insecure'.

Page 66/82 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • EasyHook Windows Hooking problem/.dll injection

    - by Tom
    OK, can someone try to find the error in this code? It should report every registry key each time something accesses one, but I keep getting: System.MissingMethodException: The given method does not exist at EasyHook.LocalHook.GetProcAdress(String InModule, String InChannelName). An example can be found here: http://www.codeproject.com/KB/DLL/EasyHook64.aspx (I can get the CreateFileW example to work!) My code is here:

    public class Main : EasyHook.IEntryPoint
    {
        FileMon.FileMonInterface Interface;
        LocalHook LocalHook;
        Stack<String> Queue = new Stack<String>();

        public Main(RemoteHooking.IContext InContext, String InChannelName)
        {
            // connect to host...
            Interface = RemoteHooking.IpcConnectClient<FileMon.FileMonInterface>(InChannelName);
            Interface.Ping();
        }

        public void Run(RemoteHooking.IContext InContext, String InChannelName)
        {
            // install hook...
            try
            {
                LocalHook localHook = LocalHook.Create(
                    LocalHook.GetProcAddress("Advapi32.dll", "RegOpenKeyExW"),
                    new DMyRegOpenKeyExW(MyRegOpenKeyExW),
                    this);
                localHook.ThreadACL.SetExclusiveACL(new int[] { });
            }
            catch (Exception ExtInfo)
            {
                Interface.ReportException(ExtInfo);
                return;
            }

            Interface.IsInstalled(RemoteHooking.GetCurrentProcessId());
            RemoteHooking.WakeUpProcess();

            // wait for host process termination...
            try
            {
                while (true)
                {
                    Thread.Sleep(500);

                    // transmit newly monitored file accesses...
                    if (Queue.Count > 0)
                    {
                        String[] Package = null;
                        lock (Queue)
                        {
                            Package = Queue.ToArray();
                            Queue.Clear();
                        }
                        Interface.OnCreateFile(RemoteHooking.GetCurrentProcessId(), Package);
                    }
                    else
                        Interface.Ping();
                }
            }
            catch
            {
                // Ping() will raise an exception if host is unreachable
            }
        }

        [DllImport("Advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        static extern int RegOpenKeyExW(UIntPtr hKey, string subKey, int ulOptions, int samDesired, out UIntPtr hkResult);

        [UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Unicode, SetLastError = true)]
        delegate int DMyRegOpenKeyExW(UIntPtr hKey, string subKey, int ulOptions, int samDesired, out UIntPtr hkResult);

        int MyRegOpenKeyExW(UIntPtr hKey, string subKey, int ulOptions, int samDesired, out UIntPtr hkResult)
        {
            Console.WriteLine(string.Format("Accessing: {0}", subKey));
            return RegOpenKeyExW(hKey, subKey, ulOptions, samDesired, out hkResult);
        }
    }

    Read the article

  • Need help with Layouts in Swing

    - by Malki
    Hi, I'm using Java Swing and I have the following problem: I have a class TnaiPanel that extends JPanel. In this class I create 3 components and then lay them out in a horizontal line using a BoxLayout. I also have a class TnaimDinamimPanel that extends JPanel as well; it contains multiple occurrences of TnaiPanel, laid out vertically using a BoxLayout. Finally, I have a class MainFrame that extends JFrame. This frame contains a menu bar and one main panel. The main panel can change (when a certain menu item is chosen, I create a new panel and set it as the frame's main panel). Now, for some reason I get "BoxLayout can't be shared" when I add the newly created TnaimDinamimPanel to the components of the frame. I don't mind using different layout objects. The result I want is a sort of "table" of components, where each TnaiPanel has fixed component sizes and spacing, essentially serving the role of a "row" in the "table". Thanks, Malki.

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive the id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push architecture (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • How do I improve this design for dealing with intersecting date ranges and resolving date range conflicts?

    - by derdo
    I am working on some code that deals with date ranges. I have pricing activities that have a start date and an end date to set a certain price for that range. There are multiple pricing activities with intersecting date ranges. What I ultimately need is the ability to query valid prices for a date range: I pass in (jan1, jan31) and I get back a list that says jan1--jan10 $4, jan11--jan19 $3, jan20--jan31 $4. There are priorities between pricing activities. Some types of pricing activities have high priority, so they override other pricing activities, and for certain types of pricing activities the lowest price wins, etc. I currently have a class that holds these pricing activities and keeps a resolved pricing calendar. As I add new pricing activities I update the resolved calendar. As I write more tests/code, it is getting very complicated, with all the different cases (pricing activities intersecting in different ways). I am ending up with very complicated code where I resolve a newly added pricing activity; see the AddPricingActivity() method below. Can anybody think of a simpler way to deal with this? Could there be similar code somewhere out there?

    public class PricingActivity
    {
        DateTime startDate;
        DateTime endDate;
        double price;

        public bool StartsBeforeEndsAfter(PricingActivity pAct)
        {
            // pAct covers this pricing activity
        }

        public bool StartsMiddleEndsAfter(PricingActivity pAct)
        {
            // the early part of pAct intersects with the later part of this pricing activity
        }

        // more similar methods to cover all the combinations of intersecting
    }

    public class PricingActivityList
    {
        List<PricingActivity> activities;
        SortedDictionary<DateTime, PricingActivity> resolvedPricingCalendar;

        public void AddPricingActivity(PricingActivity pAct)
        {
            // update the resolved calendar:
            // go over each activity and find intersecting ones,
            // update the resolved calendar correctly depending on the type of the intersection...
            // this part is getting out of hand...
        }
    }
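
    One way to simplify this, sketched below under stated assumptions: instead of resolving every pairwise intersection case when an activity is added, resolve the winning price per day at query time and coalesce runs of equal price. The classes are illustrative, not the original code; a single Priority field stands in for the activity types, with highest priority winning and lowest price breaking ties.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Activity
    {
        public DateTime Start;    // inclusive
        public DateTime End;      // inclusive
        public int Priority;      // higher overrides lower
        public double Price;
    }

    class PriceSegment
    {
        public DateTime Start;
        public DateTime End;
        public double Price;
    }

    static class PriceResolver
    {
        // Resolve the price for every day in [from, to] and merge runs of equal price.
        public static List<PriceSegment> Resolve(List<Activity> activities, DateTime from, DateTime to)
        {
            var result = new List<PriceSegment>();
            PriceSegment open = null;

            for (var day = from; day <= to; day = day.AddDays(1))
            {
                var winner = activities
                    .Where(a => a.Start <= day && day <= a.End)
                    .OrderByDescending(a => a.Priority)
                    .ThenBy(a => a.Price)
                    .FirstOrDefault();

                if (winner == null)
                {
                    open = null;           // no price for this day; close any open segment
                    continue;
                }

                if (open != null && open.Price == winner.Price)
                {
                    open.End = day;        // extend the current run
                }
                else
                {
                    open = new PriceSegment { Start = day, End = day, Price = winner.Price };
                    result.Add(open);
                }
            }
            return result;
        }
    }

    With this shape, adding a pricing activity collapses to activities.Add(pAct), and the (jan1, jan31) query is just Resolve(activities, jan1, jan31); there is no resolved calendar to keep consistent.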

    Read the article

  • Twitter OAuth callback URL

    - by jtymann
    I am having a problem with Twitter's OAuth authentication and using a callback URL. I am coding in PHP and using the sample code referenced by the Twitter wiki, http://github.com/abraham/twitteroauth. I got that code, tried a simple test, and it worked nicely. However, I want to programmatically specify the callback URL, and the example did not support that. So I quickly modified the getRequestToken() method to take a parameter, and now it looks like this:

    function getRequestToken($params = array()) {
        $r = $this->oAuthRequest($this->requestTokenURL(), $params);
        $token = $this->oAuthParseResponse($r);
        $this->token = new OAuthConsumer($token['oauth_token'], $token['oauth_token_secret']);
        return $token;
    }

    and my call looks like this:

    $tok = $to->getRequestToken(array('oauth_callback' => 'http://127.0.0.1/twitter_prompt/index.php'));

    This is the only change I made, and the redirect works like a charm; however, I get a "Could not authenticate you" error when I then try to use my newly granted access to make a call. Also, the application never actually gets added to the user's authorized connections. Now, I read the specs and I thought all I had to do was specify the parameter when getting the request token. Could someone a little more seasoned in OAuth and Twitter possibly give me a hand? Thank you.

    Read the article

  • WriteableBitmap failing badly, pixel array very inaccurate

    - by dawmail333
    I have tried, literally for hours, and I have not been able to budge this problem. I have a UserControl, that is 800x369, and it contains, simply, a path that forms a worldmap. I put this on a landscape page, then I render it into a WriteableBitmap. I then run a conversion to turn the 1d Pixels array into a 2d array of integers. Then, to check the conversion, I wire up the custom control's click command to use the Point.X and Point.Y relative to the custom control in the newly created array. My logic is thus: wb = new WriteableBitmap(worldMap, new TranslateTransform()); wb.Invalidate(); intTest = wb.Pixels.To2DArray(wb.PixelWidth); My conversion logic is as such: public static int[,] To2DArray(this int[] arr,int rowLength) { int[,] output = new int[rowLength, arr.Length / rowLength]; if (arr.Length % rowLength != 0) throw new IndexOutOfRangeException(); for (int i = 0; i < arr.Length; i++) { output[i % rowLength, i / rowLength] = arr[i]; } return output; } Now, when I do the checking, I get completely and utterly strange results: apparently all pixels are either at values of -1 or 0, and these values are completely independent of the original colours. Just for posterity: here's my checking code: private void Check(object sender, MouseButtonEventArgs e) { Point click = e.GetPosition(worldMap); ChangeNotification(intTest[(int)click.X,(int)click.Y].ToString()); } The result show absolutely no correlation to the path that the WriteableBitmap has rendered into it. The path has a fill of solid white. What the heck is going on? I've tried for hours with no luck. Please, this is the major problem stopping me from submitting my first WP7 app. Any guidance?
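
    For what it's worth, Pixels on a Silverlight/WP7 WriteableBitmap is premultiplied ARGB32 packed into a signed int, so -1 (0xFFFFFFFF) is an opaque white pixel and 0 is a fully transparent, never-painted one. A small sketch for decoding a sampled value (the helper name is made up):

    // Unpack one ARGB32 value from WriteableBitmap.Pixels.
    static void DescribePixel(int packed)
    {
        int a = (packed >> 24) & 0xFF;
        int r = (packed >> 16) & 0xFF;
        int g = (packed >> 8) & 0xFF;
        int b = packed & 0xFF;
        Console.WriteLine("A={0} R={1} G={2} B={3}", a, r, g, b);
    }

    // In the click handler above:
    //   DescribePixel(intTest[(int)click.X, (int)click.Y]);
    // -1 prints A=255 R=255 G=255 B=255 (the solid white path);
    //  0 prints A=0 R=0 G=0 B=0 (an area the path never painted).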

    Read the article

  • Internationalizing a Python 2.6 application via Babel

    - by Malcolm
    We're evaluating Babel 0.9.5 [1] under Windows for use with Python 2.6 and have the following questions that we've been unable to answer through reading the documentation or googling.
    1) I would like to use an _-like abbreviation for ungettext. Is there a consensus on whether one should use n_ or N_ for this? n_ does not appear to work: Babel does not extract the text. N_ appears to partially work: Babel extracts the text like it does for gettext, but does not format for ngettext (missing plural argument and msgstr[n]).
    2) Is there a way to set the initial msgstr fields like the following when creating a POT file? I suspect there may be a way to do this via Babel cfg files, but I've been unable to find documentation on the Babel cfg file format. "Project-Id-Version: PROJECT VERSION\n" "Language-Team: en_US \n"
    3) Is there a way to preserve 'obsolete' msgid/msgstr's in our PO files? When I use the Babel update command, newly created obsolete strings are marked with #~ prefixes, but existing obsolete message strings get deleted.
    Thanks, Malcolm
    [1] http://babel.edgewall.org/

    Read the article

  • Attaching event on the fly with prototype

    - by andreas
    Hi, I've been scratching my head over this for an hour... I have a list of todo which has done button on each item. Every time a user clicks on that, it's supposed to do an Ajax call to the server, calling the marking function then updates the whole list. The problem I'm facing is that the new list has done buttons as well on each item. And it doesn't attach the click event anymore after first update. How do you attach an event handler on newly updated element with prototype? Note: I've passed evalScripts: true to the updater wrapper initial event handler attach <script type="text/javascript"> document.observe('dom:loaded', function() { $$('.todo-done a').each(function(a) { $(a).observe('click', function(e) { var todoId = $(this).readAttribute('href').substr(1); new Ajax.Updater('todo-list', $.utils.getPath('/todos/ajax_mark/done'), { onComplete: function() { $.utils.updateList({evalScripts: true}); } }); }); }) }); </script> updateList: <script type="text/javascript"> var Utils = Class.create({ updateList: function(options) { if(typeof options == 'undefined') { options = {}; } new Ajax.Updater('todo-list', $.utils.getPath('/todos/ajax_get'), $H(options).merge({ onComplete: function() { $first = $('todo-list').down(); $first.hide(); Effect.Appear($first, {duration: 0.7}); new Effect.Pulsate($('todo-list').down(), {pulses: 1, duration: 0.4, queue: 'end'}); } }) ); } }) $.utils = new Utils('globals'); </script> any help is very much appreciated, thank you :)

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A & B:

    class A {
    public:
        A()  { cout << "A constructor" << endl; }
        ~A() { cout << "A destructor" << endl; }
    };

    class B : public A {
    public:
        B()  { cout << "B constructor" << endl; }
        ~B() { cout << "B destructor" << endl; }
    };

    1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also, the last line in B's destructor will be a call to A's destructor (the compiler does it again). Why was it built this way?
    2) When I say A * a = new B(); the compiler creates a new B object and checks to see if A is a base class of B; if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (With help from @Tyler McHenry, @Konrad Rudolph.)
    3) When I write delete a, the compiler sees that a is an object of type A, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name, destroy(), in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land.
    Please comment.

    Read the article

  • how to Compute the average probe length for success and failure - Linear probe (Hash Tables)

    - by fang_dejavu
    Hi everyone, I'm doing an assignment for my Data Structures class. We were asked to study linear probing with load factors of .1, .2, .3, ..., and .9. The formula for testing is: the average probe length using linear probing is roughly (1 + 1/(1-L)**2)/2 for success, or (1 + 1/(1-L))/2 for failure. We are required to find the theoretical values using the formulas above, which I did (just plug the load factor into the formula); then we have to calculate the empirical values (which I'm not quite sure how to do). Here is the rest of the requirements: "For each load factor, 10,000 randomly generated positive ints between 1 and 50000 (inclusive) will be inserted into a table of the 'right' size, where 'right' is strictly based upon the load factor you are testing. Repeats are allowed. Be sure that your formula for randomly generated ints is correct. There is a class called Random in java.util. USE it! After a table of the right (based upon L) size is loaded with 10,000 ints, do 100 searches of newly generated random ints from the range of 1 to 50000. Compute the average probe length for each of the two formulas and indicate the denominators used in each calculation. So, for example, each test for a .5 load would have a table of size approximately 20,000 (adjusted to be prime), and similarly each test for a .9 load would have a table of approximate size 10,000/.9 (again adjusted to be prime). The program should run displaying the various load factors tested, the average probe length for each search (the two denominators used to compute the averages will add to 100), and the theoretical answers using the formula above." How do I calculate the empirical success?
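
    The empirical averages are usually measured by counting slot inspections directly. The assignment is in Java, but here is a hedged illustrative sketch of the counting in C#; the hash function key % size and the definition "one probe per slot inspected" are my assumptions.

    using System;

    class LinearProbingExperiment
    {
        static void Main()
        {
            const int inserts = 10000;
            double load = 0.5;                              // the load factor L being tested
            int size = NextPrime((int)(inserts / load));    // "right" table size, adjusted to be prime
            int?[] table = new int?[size];
            var rng = new Random();

            for (int i = 0; i < inserts; i++)
                Insert(table, rng.Next(1, 50001));          // repeats allowed

            int hitProbes = 0, hits = 0, missProbes = 0, misses = 0;
            for (int s = 0; s < 100; s++)                   // 100 searches of fresh random ints
            {
                int probes;
                if (Search(table, rng.Next(1, 50001), out probes)) { hits++; hitProbes += probes; }
                else { misses++; missProbes += probes; }
            }

            Console.WriteLine("L={0}: success avg {1:F2} over {2}, failure avg {3:F2} over {4}",
                load,
                hits == 0 ? 0.0 : (double)hitProbes / hits, hits,
                misses == 0 ? 0.0 : (double)missProbes / misses, misses);
        }

        static void Insert(int?[] t, int key)
        {
            int i = key % t.Length;
            while (t[i] != null) i = (i + 1) % t.Length;    // linear probing
            t[i] = key;
        }

        // Counts one probe per slot inspected; an empty slot ends an unsuccessful search.
        static bool Search(int?[] t, int key, out int probes)
        {
            probes = 0;
            int i = key % t.Length;
            while (true)
            {
                probes++;
                if (t[i] == null) return false;
                if (t[i] == key) return true;
                i = (i + 1) % t.Length;
            }
        }

        static int NextPrime(int n)
        {
            for (int c = n; ; c++)
            {
                bool prime = c > 1;
                for (int d = 2; d * d <= c; d++)
                    if (c % d == 0) { prime = false; break; }
                if (prime) return c;
            }
        }
    }

    The two denominators (hits and misses) add to 100, matching the requirement, and the per-load-factor empirical averages can then be printed next to the theoretical formulas.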

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like TinyURL or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs, but I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use, and guaranteed to be unique.

    function short_name($file) {
        $ino = @fileinode($file);
        $s = base_convert($ino, 10, 36);
        return $s;
    }

    This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like the following have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile".

    function short_name($file) {
        $ino = @fileinode($file);
        // arbitrarily selected pre-existing file,
        // as all newer files will have higher inodes
        $ino = $ino - @fileinode($referencefile);
        $s = base_convert($ino, 10, 36);
        return $s;
    }

    Read the article

  • How do I install the OpenSSL C++ library on Ubuntu?

    - by Daryl Spitzer
    I'm trying to build some code on Ubuntu 10.04 LTS that uses OpenSSL 1.0.0. When I run make, it invokes g++ with the "-lssl" option. The source includes: #include <openssl/bio.h> #include <openssl/buffer.h> #include <openssl/des.h> #include <openssl/evp.h> #include <openssl/pem.h> #include <openssl/rsa.h> I ran: $ sudo apt-get install openssl Reading package lists... Done Building dependency tree Reading state information... Done openssl is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded. But I guess the openssl package doesn't include the library. I get these errors on make: foo.cpp:21:25: error: openssl/bio.h: No such file or directory foo.cpp:22:28: error: openssl/buffer.h: No such file or directory foo.cpp:23:25: error: openssl/des.h: No such file or directory foo.cpp:24:25: error: openssl/evp.h: No such file or directory foo.cpp:25:25: error: openssl/pem.h: No such file or directory foo.cpp:26:25: error: openssl/rsa.h: No such file or directory How do I install the OpenSSL C++ library on Ubuntu 10.04 LTS? I did a man g++ and (under "Options for Linking") for the -l option it states: " The linker searches a standard list of directories for the library..." and "The directories searched include several standard system directories..." What are those standard system directories?

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello all, we are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle, complete the validation and regression testing, and handle all configuration issues before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting pushback on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of a build and regression testing, to have the developer diff the Feature branch and the newly merged Trunk. That process, in their mind, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article

  • When I zip up my demo FlashDevelop project..why does it break?

    - by Ryan
    I built an AS3 image gallery using FlashDevelop. Before I zip up the application, I can run the image gallery in my browser by simply opening the index.html for the project. Everything works perfectly. I then zip up the project as proj-0.1.2.zip using WinRAR. I then unzip this newly created zip and try to load the application using the project index.html as above. The gallery doesn't function properly. From seeing what happens, it appears as though the image metadata is not present (but I'm not sure, see below). There are other applications as well that are broken; videos don't load. If an application doesn't depend on any external assets, then everything looks fine. Another thing: if I then build the FlashDevelop project and republish the SWF, then it works in the index.html like I want. What is going on here? I want people to be able to fire up my demo apps out of the box by just running the index.html. If that doesn't always work and they have to figure out that they need to rebuild the SWF, then that's pretty bad.

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "Normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, then it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the id's and inserts them into a table which is equivalent to the original de-normailized table, but this table is fully normalized and consists of mainly foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts. i.e. INSERT INTO TableA (first_value) VALUES (argument_from_sp) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id); SET @TableAId = LAST_INSERT_ID(); The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you

    Read the article

  • Growing user control not updating

    - by user328259
    I am developing in C# and .NET 2.0. I have a user control that draws cells (columnar) depending upon the maximum number of cells. There are some drawing routines that generate the necessary cells. There is a property NumberOfCells that adjusts the height of this control: CELLHEIGHT_CONSTANT * NumberOfCells. The OnPaint() method is overridden (code that draws the number of cells). There is another user control that contains a panel, which contains the userControl1 from above. There is a property NumberCells that changes userControl1's NumberOfCells. UserControl2 is then placed on a Windows form. On that form there is a NumericUpDown control (it only increments from 1). When the user increments by 1, I adjust the VerticalScroll.Maximum by 1 as well. Everything is well and good, but when I increment once, the panel updates fine (it inserts a vertical scrollbar when necessary) but cells are not being added! I've tried calling Invalidate on userControl2 and on the form, but nothing seems to draw the newly added cells. Any assistance is appreciated. Thank you in advance. Lawrence
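
    A hedged sketch of the pattern that usually makes this work, using made-up names that mirror the description (CellColumnControl playing the role of userControl1): the NumberOfCells setter resizes the inner control and invalidates it, and the hosting panel relies on AutoScroll instead of a hand-managed VerticalScroll.Maximum.

    using System.Drawing;
    using System.Windows.Forms;

    public class CellColumnControl : UserControl      // stands in for userControl1
    {
        private const int CellHeight = 24;            // placeholder for CELLHEIGHT_CONSTANT
        private int numberOfCells = 1;

        public int NumberOfCells
        {
            get { return numberOfCells; }
            set
            {
                numberOfCells = value;
                Height = CellHeight * numberOfCells;  // grow the control itself
                Invalidate();                         // repaint with the new cell count
            }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            for (int i = 0; i < numberOfCells; i++)
                e.Graphics.DrawRectangle(Pens.Black,
                    new Rectangle(0, i * CellHeight, Width - 1, CellHeight - 1));
        }
    }

    With the inner control actually resized, setting AutoScroll = true on the containing panel makes the scrollbar appear and track the content on its own, so the NumericUpDown handler on the form only has to forward its value to NumberOfCells.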

    Read the article

  • Python unittest (using SQLAlchemy) does not write/update database?

    - by Jerry
    Hi, I am puzzled at why my Python unittest runs perfectly fine without actually updating the database. I can even see the SQL statements from SQLAlchemy and step through the newly created user object's email -- ...INFO sqlalchemy.engine.base.Engine.0x...954c INSERT INTO users (user_id, user_name, email, ...) VALUES (%(user_id)s, %(user_name)s, %(email)s, ...) ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_id': u'4cfdafe3f46544e1b4ad0c7fccdbe24a', 'email': u'[email protected]', ...} > .../tests/unit_tests/test_signup.py(127)test_signup_success() -> user = user_q.filter_by(user_name='test').first() (Pdb) n ...INFO sqlalchemy.engine.base.Engine.0x...954c SELECT users.user_id AS users_user_id, ... FROM users WHERE users.user_name = %(user_name_1)s LIMIT 1 OFFSET 0 ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_name_1': 'test'} > .../tests/unit_tests/test_signup.py(128)test_signup_success() -> self.assertTrue(isinstance(user, model.User)) (Pdb) user <pweb.models.User object at 0x9c95b0c> (Pdb) user.email u'[email protected]' Yet at the same time when I login to the test database, I do not see the new record there. Is it some feature from Python/unittest/SQLAlchemy/Pyramid/PostgreSQL that I'm totally unaware of? Thanks. Jerry

    Read the article

  • In .NET, how do I prevent, or handle, tampering with form data of disabled fields before submission?

    - by David
    Hi, If a disabled drop-down list is dynamically rendered to the page, it is still possible to use Firebug, or another tool, to tamper with the submitted value, and to remove the "disabled" HTML attribute. This code: protected override void OnLoad(EventArgs e) { var ddlTest = new DropDownList() {ID="ddlTest", Enabled = false}; ddlTest.Items.AddRange(new [] { new ListItem("Please select", ""), new ListItem("test 1", "1"), new ListItem("test 2", "2") }); Controls.Add(ddlTest); } results in this HTML being rendered: <select disabled="disabled" id="Properties_ddlTest" name="Properties$ddlTest"> <option value="" selected="selected">Please select</option> <option value="1">test 1</option> <option value="2">test 2</option> </select> The problem occurs when I use Firebug to remove the "disabled" attribute, and to change the selected option. On submission of the form, and re-creation of the field, the newly generated control has the correct value by the end of OnLoad, but by OnPreRender, it has assumed the identity of the submitted control and has been given the submitted form value. .NET seems to have no way of detecting the fact that the field was originally created in a disabled state and that the submitted value was faked. This is understandable, as there could be legitimate, client-side functionality that would allow the disabled attribute to be removed. Is there some way, other than a brute force approach, of detecting that this field's value should not have been changed? I see the brute force approach as being something crap, like saving the correct value somewhere while still in OnLoad, and restoring the value in the OnPreRender. As some fields have dependencies on others, that would be unacceptable to me.
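
    One server-side defence, sketched under assumptions (GetTrustedValue is a hypothetical placeholder for wherever the real value lives): treat whatever was posted for a control that was rendered disabled as untrusted, and reassert the server's value late in the lifecycle, since post data for controls created during Load is applied after OnLoad.

    private DropDownList ddlTest;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        ddlTest = new DropDownList { ID = "ddlTest", Enabled = false };
        ddlTest.Items.AddRange(new[]
        {
            new ListItem("Please select", ""),
            new ListItem("test 1", "1"),
            new ListItem("test 2", "2")
        });
        Controls.Add(ddlTest);
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        if (Page.IsPostBack && !ddlTest.Enabled)
        {
            // The field was rendered disabled, so ignore anything the browser sent back
            // and restore the value the server already knows.
            ddlTest.SelectedValue = GetTrustedValue();   // hypothetical server-side lookup
        }
    }

    Depending on the page, it may be preferable to validate the posted value and fail loudly instead of silently restoring it, but the core idea is the same: a value for a field rendered disabled should never be trusted as user input.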

    Read the article

  • JQuery UI drag and drop doesn't notice DOM changes happening during the dragging ?

    - by user315677
    Hi everybody, I have an HTML table in which each line (<tr>) represents a group of client computers. Each line can be expanded to show the clients belonging to the group. The <tr> containing the clients are always generated but hidden, and shown when clicking the plus button at the beginning of each line. The clients themselves (they are <div>s) can be dragged and dropped into another group as long as this group is already expanded. So far it works fine. What I am trying to achieve now is that the client can be dragged to a collapsed group, and after a second or so hovering over the group it will be expanded and the client can be dropped amongst the other clients of the group. I programmed the hover using the in and out events of the droppable and it expands the group all right, but (and now it starts to be hard to explain with words ;) the behavior of the droppable (the client) is still as if the table line that appeared were not there: the in, out, and drop events of the droppable are fired on the old positions of the elements, and the in, out, and drop events of the newly appeared <tr> are never fired. It looks as if jQuery memorizes the position, size, etc. of the elements when the drag starts, and these values are not updated if there is a change in the DOM before the drop happens... Can someone confirm this behavior is normal, or could it be caused by another problem in my own code? Any workaround? (The question is quite bloated already, so I don't include any code, but I'll gladly upload it if required.)

    Read the article

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it does save the image to that file. On the onActivityResult I can check that the image is being saved in the intended file, which is correct. The weird thing is that anyhow, the image is also saved in a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (epoch time in which the image was taken). I've checked the Camera app source (froyo-release branch) and I'm almost sure that the code path is correct and wouldn't have to save the image, but I'm a noob and I'm not completly sure. AFAIK, the image saving process starts with this callback (comments are mine): private final class JpegPictureCallback implements PictureCallback { ... public void onPictureTaken(...){ ... // This is where the image is passed back to the invoking activity. mImageCapture.storeImage(jpegData, camera, mLocation); ... public void storeImage(final byte[] data, android.hardware.Camera camera, Location loc) { if (!mIsImageCaptureIntent) { // Am i an intent? int degree = storeImage(data, loc); // THIS SHOULD NOT BE CALLED WITHIN THE CAPTURE INTENT!! ....... // An finally: private int storeImage(byte[] data, Location loc) { try { long dateTaken = System.currentTimeMillis(); String title = createName(dateTaken); String filename = title + ".jpg"; // Eureka, timestamp filename! ... So, I'm receiving the correct data, but it's also being saved in the "storeImage(data, loc);" method call, which should not be called... It'd not be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I found about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both with my Nexus One with Froyo and my Huawei U8110 with Eclair. Could please anyone enlight me? Thanks a lot.

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database, so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I do a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure if that would solve all my problems. I wasn't able to find anything that would tell me whether or not it would migrate preexisting data, or whether it only worked for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken

    Read the article

  • Impersonate SYSTEM (or equivalent) from Administrator Account

    - by KevenK
    This question is a follow-up and continuation of this question about a privilege problem I'm dealing with currently. Problem summary: I'm running a program under a Domain Administrator account that does not have the Debug programs (SeDebugPrivilege) privilege, but I need it on the local machine. Kludgey solution: the program can install itself as a service on the local machine and start the service. Said service now runs under the SYSTEM account, which enables us to use our SeTcbPrivilege privilege to create a new access token which does have SeDebugPrivilege. We can then use the newly created token to re-launch the initial program with the elevated rights. I personally do not like this solution. I feel it should be possible to acquire the necessary privileges as an Administrator without having to make system modifications such as installing a service (even if it is only temporary). I am hoping that there is a solution that minimizes system modifications and can preferably be done on the fly (i.e., not require the program to restart itself). I have unsuccessfully tried LogonUser as SYSTEM, and tried OpenProcessToken on a known SYSTEM process (such as csrss.exe), which fails, because you cannot OpenProcess with PROCESS_TOKEN_QUERY to get a handle to the process without the privileges I'm trying to acquire. I'm just at my wit's end trying to come up with an alternative solution to this problem. I was hoping there was an easy way to grab a privileged token on the host machine and impersonate it for this program, but I haven't found a way. If anyone knows of a way around this, or even has suggestions on things that might work, please let me know. I really appreciate the help, thanks!

    Read the article

  • jQuery: how to create a new div, assign some attributes, and create an id from some other element's attributes?

    - by Ronedog
    jQuery: how to create a new div, assign some attributes, and create an id from some other element's attributes? How can I accomplish this? What I've tried below is not working and I'm unsure how to make it work. What I'm trying to do is create a div and assign it some attributes, then append this div to all the <li> elements that are on the last node of my unordered list. And lastly, but most importantly, I want to retrieve the value of the attribute called "p_node" from the <li> that is being appended to and insert it as part of the ID of the newly created div. Here's my current jQuery code:

    $('<div />,{id:"'+ $("#nav li:not(:has(li))").attr("p_node").val() +'_p_cont_div", class:"property_position"}').appendTo("#nav li:not(:has(li))");

    Here's the HTML before the div creation:

    <ul>
        <li p_node="98" class="page_name">Category A</li>
    </ul>
    <ul>
        <li p_node="99" class="page_name">Category B</li>
    </ul>

    Here's what I want it to look like after the new div creation:

    <ul>
        <li p_node="98" class="page_name">Category A</li>
        <div id="98_p_cont_div" class="property_position"></div>
    </ul>
    <ul>
        <li p_node="99" class="page_name">Category B</li>
        <div id="99_p_cont_div" class="property_position"></div>
    </ul>

    Read the article

  • Bytecode and Objects

    - by HH
    Hey everyone, I am working on a bytecode instrumentation project. Currently, when handling objects, the verifier throws an error most of the time, so I would like to get things clear concerning the rules for objects (I read the JVMS but couldn't find the answer I was looking for). I am instrumenting the NEW instruction.

    Original bytecode:

    NEW <MyClass>
    DUP
    INVOKESPECIAL <MyClass.<init>>

    After instrumentation:

    NEW <MyClass>
    DUP
    INVOKESTATIC <Profiler.handleNEW>
    DUP
    INVOKESPECIAL <MyClass.<init>>

    Note that I added a call to Profiler.handleNEW(), which takes an object reference (the newly created object) as its argument. The sequence above throws a VerificationError, whereas if I don't add the INVOKESTATIC (leaving only the DUP), it doesn't. So what is the rule that I'm violating? I can duplicate an uninitialized reference, but I can't pass it as a parameter? I would appreciate any help. Thank you.

    Read the article

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year now, and when we migrated to an online server we started getting this error:

    Commit: Commit failed (details follow):
    File or directory 'x.php' is out of date; try updating
    resource out of date; try updating
    CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)

    We are currently using SmartSVN 6.5, and we have also tested with RapidSVN & Syncro (but we can't use TortoiseSVN as we have a lot of Ubuntu users). At the beginning I thought "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it. You are just stuck with the error. The only thing that could work is removing the file from SVN and adding your version, but that would be against why we are using SVN in the first place. This is our Apache config (and yes, autoversioning is ON):

    <Location />
        DAV svn
        SVNPath /home/example/svn
        SVNAutoversioning on
        AuthType Basic
        AuthName "Access Restricted"
        AuthUserFile /home/example/svn-auth-file
        Require valid-user
    </Location>
    <Directory />
        <Files ~ "^\.ht">
            Order allow,deny
            Allow from all
            Satisfy All
        </Files>
        <Files ~ "^error_log">
            Order allow,deny
            Allow from all
            Satisfy All
        </Files>
    </Directory>

    And here are some observations:
    - We don't receive conflicts anymore; we just get this 409 conflict.
    - You can somehow avoid the error if you always update before committing.
    - When committing a modified file plus a newly added file, you get the error, as if the added file incremented the version by one and then you are committing another file with an older version.
    Please advise, we are about to go insane.

    Read the article
