Search Results

Search found 26523 results on 1061 pages for 'jack back'.


  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can reach multiple GB in size in a matter of weeks.

    The traditional 'fix' was to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics. Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average the 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets to 1/3600 of their original size. Very cool. I can now allow my datasets to go on forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32MB on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you a prompt that allows you to once again shrink the dataset by averaging the second data into minutes or hours. After doing this, my dataset shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time at the minute or hour data, you are going to see much larger time intervals, right? You will need to decide what granularity you can live with, and for how long. For example, my "Disk: Percent utilized" view from 5-21-2012 2:42 pm to 4:22 pm looked very different after I went through the delete process to change everything older than 1 week to "Minutes". Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

        Entity { id; map<id_type, Attribute> attributes; }
        System { update(); vector<Entity> entities; }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System {
            update() {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem: In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input).

    Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase in which the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not counting just "critical section"ing everything) to avoid races is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible rests on the assumption that modification write-backs are usually very small in complexity and cost little performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (threads spend more of their time working instead of waiting).

    A concrete example might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, all available threads would update a subset of the entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (ones that don't modify the same data as the physics system); otherwise the remaining threads sit waiting and wasting time. However, that has disadvantages:

    - Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data.
    - Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    - Sometimes it might not even be possible at all, because a system depends on data that is touched by all other systems.

    Solution?
    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, the attributes of entities that need to be modified are finally written back. The Question: How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution? (A sketch of the buffered-write idea follows below.)
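    To make the idea concrete, here is a minimal C++ sketch of the buffered-write split, in the spirit of the pseudo-code above. All names are illustrative, not from an existing engine, and the attribute map is collapsed to a single position field:

        #include <vector>

        struct vec3 { float x, y, z; };

        struct Entity {
            int  id;
            vec3 position;   // stand-in for attributes["position"]
        };

        // A recorded intention to write, produced during the read phase.
        struct PendingWrite {
            Entity* target;          // assumes entity addresses stay stable
            vec3    newPosition;
        };

        class MovementSystem {
        public:
            // Phase 1: safe to run on many threads, each with its own slice
            // of the entities and its own write buffer; nothing is mutated.
            void update(const std::vector<Entity*>& slice,
                        std::vector<PendingWrite>& writes) {
                for (Entity* e : slice) {
                    vec3 next = { e->position.x + 1.0f,
                                  e->position.y + 1.0f,
                                  e->position.z + 1.0f };
                    writes.push_back({ e, next });   // record, don't apply
                }
            }

            // Phase 2: runs after all updates finish (a barrier). It is a
            // plain copy loop, so doing it single-threaded stays cheap.
            void commit(std::vector<std::vector<PendingWrite>>& perThreadWrites) {
                for (auto& buffer : perThreadWrites) {
                    for (const PendingWrite& w : buffer)
                        w.target->position = w.newPosition;
                    buffer.clear();
                }
            }
        };

    One buffer per thread avoids locking during the parallel phase; the price is that every system observes the previous frame's values, which has to be acceptable for this design to work.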

    Read the article

  • SharePoint For Newbie Developers: Code Scope

    - by Mark Rackley
    So, I continue to try to come up with diagrams and information to help new SharePoint developers wrap their heads around this SharePoint beast, especially when those newer to development are on my team. To that end, I drew up the below diagram to help some of our junior devs understand where/when code is being executed in SharePoint at a high level. Note that I say “High Level”… This is a simplistic diagram that can get a LOT more complicated if you want to dive in deeper. For the purposes of my lesson it served its purpose well. So, please, no comments from the peanut gallery about information 3 levels down that’s missing unless it adds to the discussion. Thanks.

    So, the diagram below details where code is executed on a page load and gives the basic flow of the page load. There are actually many more steps, but again, we are staying high level here. I just know someone is still going to say something like “Well.. actually… the dlls are getting executed when…” Anyway, here’s the diagram with some information I like to point out:

    Code Scope / Where it is executed. Looking at the diagram, we see that dlls and XSL are executed on the server and that JavaScript/jQuery is executed on the client. This is the main thing I like to point out, for the following reasons:

    XSL (for the most part) is faster than JavaScript. I actually get this question a lot. Since XSL is executed on the server, less data gets passed over the wire and a beefier machine (hopefully) is doing the processing. The outcome, of course, is better performance. When you are using jQuery and making Web Service calls, you are building XML strings and sending them to the server; then ALL the results come back, and the client machine has to parse through the XML, use what it needs and ignore the rest (and there is a lot of garbage that comes back from SharePoint Web Service calls).

    XSL and JavaScript cannot work together in the same scope. Let me clarify. JavaScript can send data back to SharePoint in postbacks that XSL can then use. XSL can output JavaScript and initialize JavaScript variables. However, XSL cannot call a JavaScript method to get a value, and JavaScript cannot directly interact with XSL and call its templates. They are executed in their own scope only. No crossing of boundaries here. (A small illustration follows below.)

    So, what does this all mean? Well, nothing too deep. This is just some basic fundamental information that all SharePoint devs need to understand. It will help you determine the best solution for your specific development situation, and it will help the new guys understand why they get an error when trying to call a JavaScript function from within XSL. Let me know if you think quick little blogs like this are helpful or just add to the noise. I could probably put together several more that are similar. As always, thanks for stopping by; hope you learned something new.
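    As promised above, a small illustration of the one direction that does work (the Title field is an assumed example): XSL can emit script text for the browser to run later, but the two still never call each other:

        <xsl:template match="Row">
          <script type="text/javascript">
            // rendered on the server as plain text, executed later on the client
            var itemTitle = '<xsl:value-of select="@Title"/>';
          </script>
        </xsl:template>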

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it’s not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky.

    In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help, but don’t go nuts… Having the font too big means you will be scrolling a lot and the code will again be rendered unreadable.

    There is more, though, but you need to take a deep breath, open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first: we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings wizard you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around, choose to export settings. Hit Next and Next again, name your settings profile in the final step of the wizard, and then click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks.

    So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don’t go too mad and change too many things without taking a look at the results. For every item in the list you can change font, size, weight, colour, background colour etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to a yellow highlight and black font rather than the default pale blue background and slightly darker font. To achieve that, select Text Editor and then select “Selected Text” in the Display Items listbox. As you change things, the Sample area gives you an idea of the effect you are going to have. Black and yellow is the colour combination with the highest contrast – that’s why bees and wasps* are that colour.

    What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don’t forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.

    * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Customer owes me half my payment. Should I take ownership of his AWS account for charging? How?

    - by Cawas
    Background: They paid me my first half (back on April 15th) before we could even get into an agreement. Very nice of him! Then I finished the 2-week job of setting up the servers, using the AWS credentials he had just bought. I waited another 2 weeks for everything to settle, and it was all running fine. He did what he needed with his sftp account; everyone was happy. Now, it has been almost 2 months since I finished the job and I still haven't gotten the 2nd half. I must admit, it's not much money (about US$400, converted), but it would help me pay the bills at least. Heck, the Amazon bills they are paying are little less than that (for now).

    Measures: I'm wondering how I can go about charging him now. The first thought, of course, would be taking everything down and saying "pay now, or be doomed". If that's not good enough, then I've lost it. I have no contract, and I doubt I could get a lawsuit going in this country for such a low value based only on emails. And I don't really want to get too aggressive here - there might be a business chance in the future and I don't want to ruin it. The second thought would be just changing the password. But then he probably could gain access again by some recovery means. That's where my question mainly lies: how can I do it without leaving any room for recovery from his side? I even got the first AWS "your account was created" mail from him, showing me I could begin my job, back then. Lastly, do you have any other ideas on what I can and should do in this case?

    Responding to Answers: Please consider reading the current answers and comments. This is not a very simple case. I've considered many, many options (including all lawful ones) before considering the ones I've listed here, and I am willing to take the loss and all that. That's not the point. The point is being practical here. I will call him again and talk about it. I will push hard on getting lawyers and getting a contract. I am ready to go all forth while I have time and energy for it. But, in practice, there is this extra thing I can do to assure myself of the work I've done. I can basically take it back and delete everything! I'd only take his password because I can find no other way to do it within Amazon. Maybe contacting Amazon and explaining the situation? I don't know. Give me ideas on this technical side! And thanks everyone for the attention and for helping me clarify the issue so far! :)

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those.

    Well, over time there has been an evolution of this ideal. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache… however, you don’t always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy.

    SyncToy is from Microsoft, seemingly released in 2009. It wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?) and gives you a choice of synchronization methods: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don’t want to replicate, and click Run. Scheduling is even easier: MS has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, a -R as an argument, and the time schedule (an example task definition appears below). No more complicated command lines or scripts.

    I find this especially useful when I use MS Backup to back up a system volume, but only want subsets of backup information from a data share, and ONLY when that dataset has changed - not relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc. The backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.

    Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates. So due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions, so it cannot be deleted unless I intervene.

    Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
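    As an illustration only (the install path and the nightly schedule are my assumptions, not from the article), the scheduled task boils down to a single command:

        schtasks /Create /TN "SyncToy Nightly" /SC DAILY /ST 02:00 ^
                 /TR "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R"

    The bare -R runs every active folder pair you have defined; a specific pair name can reportedly be appended to -R if you only want to run one of them.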

    Read the article

  • Gnome-shell fails to load on 12.10

    - by Githlar
    I'm usually the one answering questions, but on this one I'm thoroughly stumped!

    My Setup: Ubuntu 12.10 (dist upgrade from 12.04), ATI M96 [Mobility Radeon HD 4650]. Upon the first installation of 12.10 I had all kinds of issues getting the legacy ATI drivers to install (I guess the source for the drivers isn't kosher with kernel 3.5). So, I added the repository ppa:makson96/fglrx - which has a version of the ATI source patched to work with kernel 3.5. After installation of fglrx-legacy from that PPA, gnome-shell and all my graphics worked fine... until today.

    The Problem: I unsuspended my computer today and the screen was black (not off; the black from the gnome lock screen). I'd move my mouse/hit a key, and the background would flash and then go back to black. Restarted via VT1. Logged into a Gnome (gnome-shell) session, but no gnome-shell!

    Investigation: First, I went to VT1 and tried export DISPLAY=:0; gnome-shell --replace. It appeared to work fine; switched back to X, and nothing. Went back to VT1 and saw this error message:

    JS ERROR: !!! Exception was: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO
    JS ERROR: !!! message = '"Object 0x7fc748129c30 is not a subclass of (null), it's a xO"'
    JS ERROR: !!! fileName = '"/usr/share/gnome-shell/js/ui/tweener.js"'
    JS ERROR: !!! lineNumber = '218'
    JS ERROR: !!! stack = '"()@/usr/share/gnome-shell/js/ui/tweener.js:218 wrapper()@/usr/share/gjs-1.0/lang.js:204 ()@/usr/share/gjs-1.0/lang.js:145 ()@/usr/share/gjs-1.0/lang.js:239 init()@/usr/share/gnome-shell/js/ui/tweener.js:49 init()@/usr/share/gnome-shell/js/ui/environment.js:96 @<main>:1 "'
    Window manager warning: Log level 32: Execution of main.js threw exception: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO

    Note: everywhere it says "it's a xO", xO is actually garbled and changes every time (I'm thinking memory corruption?). This error is thrown by line 96 of /usr/share/gnome-shell/js/ui/environment.js: tweener.Init(). Did a purge of fglrx-legacy, reboot, reinstall fglrx-legacy, reboot... same thing. Did a ppa-purge of ppa:gnome3-team/gnome3, and reinstalled gnome-shell and ubuntu-desktop from the standard repositories... same thing. I'm really at a loss here. I love gnome-shell, and after using it for nearly a year now, gnome classic just seems so archaic.
    Additional Information. Apt log from the day I first suspended my machine (these are upgrades from the gnome3-team/gnome3 and ubuntu-wine/ppa PPAs):

    Start-Date: 2012-11-24 17:30:28
    Commandline: aptdaemon role='role-commit-packages' sender=':1.618'
    Install: gkbd-capplet:amd64 (3.6.0-0ubuntu1), gnome-control-center-unity:amd64 (1.0-0ubuntu1~ubuntu12.10.1)
    Upgrade: nautilus:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), libgnome-control-center1:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), wine1.5-i386:i386 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), wine1.5:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), gnome-settings-daemon:amd64 (3.4.2-0ubuntu14, 3.6.3-0ubuntu1~ubuntu12.10.1), gnome-control-center-data:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), gnome-accessibility-themes:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), gnome-themes-standard:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), wine1.5-amd64:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), nautilus-data:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), gnome-control-center:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), libnautilus-extension1a:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1)
    End-Date: 2012-11-24 17:31:32

    fglrxinfo (driver appears to be working):

    display: :0 screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Mobility Radeon HD 4650
    OpenGL version string: 3.3.11653 Compatibility Profile Context

    Does anybody have any further ideas?

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users on this bacon flavoured unicorn morning. The theme here is: keep it clean!

    Write "friendly" email addresses. Remember, it's human beings reading your content. Seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead, just do the simple steps of writing the content in plain English and going back, highlighting the name and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.

    Use friendly column and list names. This is a big pet peeve of mine. When you first create a column or list with spaces, the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x0020_Amazing_x0020_List_x0020_of_x0020_Animals_x0020_with_x0020_Large_x0020_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) and the hyperlink (where everything is uuencoded). Instead, create the list with a distinct and compact name, then go back and change it to whatever you want. The end result is a better-formed name that you can both script and access in code more easily (a sketch follows below).

    Keep your Views clean. When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want a different default view, go back and create one with all the fields, filtering and sorting columns you want, and set it as default. It's a good idea to keep the original AllItems.aspx (note the lack of a space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default, and don't add every column just because you can. Create separate views for distinct responsibilities and try to keep the number of columns down to a single screen to prevent horizontal scrolling.

    Simple navigation. The Quick Launch is a great tool for navigating around your site, but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense: if "Documents and Lists" doesn't fit your site but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.

    Enjoy!
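    As mentioned above, a hedged server object model sketch of the create-compact-then-rename flow (the site URL and names are invented for illustration):

        // Create the list with a short name so the internal/URL name stays
        // clean, then change only the display title afterwards.
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            Guid id = web.Lists.Add("Animals", "", SPListTemplateType.GenericList);
            SPList list = web.Lists[id];
            list.Title = "My Amazing List of Animals";  // friendly display name
            list.Update();
            // URL stays /Lists/Animals -- no _x0020_ escapes to script around
        }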

    Read the article

  • Simple Navigation In Windows Phone 7

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com!

    When moving to the mobile platform, all applications need to be able to provide different views. Navigating around views in Windows Phone 7 is a very easy thing to do. This post will introduce you to the simplest technique for navigation in Windows Phone 7 apps.

    Steps:

    1. Create a new Windows Phone Application project.

    2. In the MainPage.xaml file, copy the following XAML into the ContentGrid Grid:

        <StackPanel Orientation="Vertical" VerticalAlignment="Center">
            <TextBox Name="ValueTextBox" Width="200"></TextBox>
            <Button Width="200" Height="30" Content="Next Page" Click="Button_Click"></Button>
        </StackPanel>

    This gives a text box for the user to enter text and a button to navigate to the next page.

    3. Copy the following event handler code to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.Navigate(new Uri(string.Format("/SecondPage.xaml?val={0}", ValueTextBox.Text), UriKind.Relative));
        }

    The event handler uses the NavigationService.Navigate() function. This is what makes the navigation to another page happen. The function takes a Uri parameter with the name of the page to navigate to and the indication that it is a relative Uri to the current page. Note also that the querystring is formatted with the value entered in the ValueTextBox control – in a similar manner to a standard web querystring.

    4. Add a new Windows Phone Portrait Page to the project, named SecondPage.xaml.

    5. Paste the following XAML into the ContentGrid Grid in SecondPage.xaml:

        <Button Name="GoBackButton" Width="200" Height="30" Content="Go Back" Click="Button_Click"></Button>

    This provides a button to navigate back to the first page.

    6. Copy the following event handler code to the SecondPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.GoBack();
        }

    This tells the application to go back to the previously displayed page.

    7. Add the following code to the constructor in SecondPage.xaml.cs:

        this.Loaded += new RoutedEventHandler(SecondPage_Loaded);

    8. Add the following loaded event handler to the SecondPage.xaml.cs file:

        void SecondPage_Loaded(object sender, RoutedEventArgs e)
        {
            if (NavigationContext.QueryString["val"].Length > 0)
                MessageBox.Show(NavigationContext.QueryString["val"], "Data Passed", MessageBoxButton.OK);
            else
                MessageBox.Show("{Empty}!", "Data Passed", MessageBoxButton.OK);
        }

    This code pops up a message box displaying either the text entered on the first page or the message “{Empty}!” if no text was entered.

    9. Run the application, enter some text in the text box and click on the Next Page button to see the application in action.

    Congratulations! You have created a new Windows Phone 7 application with page navigation.

    Read the article

  • How to reclaim storage for deleted LOBs

    - by Jim Hudson
    I have a LOB tablespace, currently holding 9GB out of 12GB available. And, as far as I can tell, deleting records doesn't reclaim any storage in the tablespace. I'm getting worried about handling further processing. This is Oracle 11.1, and the data are in a CLOB and a BLOB column in the same table. The LOB index segments (SYS_IL...) are small; all the storage is in the data segments (SYS_LOB...). We've tried purge and coalesce and didn't get anywhere -- same number of bytes in user_extents. "Alter table xxx move" will work, but we'd need someplace to move it to that has enough space for the revised data. We'd also need to do that off hours and rebuild the indexes, of course, but that's easy enough. Copying out the good data, doing a truncate, then copying it back will also work, but that's pretty much just what the "alter table" command does. Am I missing some easy way to shrink things down and get the storage back? Or is "alter table xxx move" the best approach? Or is this a non-issue, and Oracle will grab back the space from the deleted LOB rows when it needs it?
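    For reference, one hedged option to test (untested here; it applies to basicfile LOBs in an ASSM-managed tablespace, and the column names are placeholders) is an in-place shrink of the LOB segments, which avoids the scratch space a move needs:

        -- Shrinks each LOB data segment in place, keeping the table online.
        ALTER TABLE xxx MODIFY LOB (clob_col) (SHRINK SPACE);
        ALTER TABLE xxx MODIFY LOB (blob_col) (SHRINK SPACE);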

    Read the article

  • Git subtree not properly using .gitignore when doing a partial clone

    - by D W
    I am a graduate student with many scripts, bibliography data in bibtex, a thesis draft in latex, presentations in open office, posters in scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it, with versioning, into a separate project. If I make changes, I'd like to be able to merge them back to the original project.

    Based on my understanding, git subtree can do this: http://github.com/apenwarr/git-subtree There is an example along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/

    Say the trunk of my project contained the directories (bib bin cfg data fig src todo). When I use:

        git subtree split -P bib -b export
        git checkout export

    I get the bib directory, plus all files that should have been ignored or considered binary based on .gitignore, such as the src directory and everything in it that ends in a tilde, or the ./data directory.

        dwickrama@DWwork:~/research/trunk$ ls * -r
        biblography.bib JabRef
        src: script1.sh~ README~ script2.sh~ script3.sh~ script4.R~ script5.awk~ script5.py~
        cfg: cfgFile1.ini~ cfgFile2.ini~ cfgFile3.ini~
        bin: bigBinaryPackage1 bigBinaryPackage2
        dwickrama@DWwork:~/research/trunk$

    My .gitignore file is as follows:

        *.doc diff=word
        *.tex diff=tex
        *.bib diff=bibtex
        *.py diff=python
        *.eps binary
        *.jpg binary
        *.png binary
        ./bin/* binary
        *~

    How do I prevent this?
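    One hedged observation, for comparison: lines like *.doc diff=word are .gitattributes directives, which .gitignore does not parse, so the conventional split would look something like this:

        # .gitattributes -- diff/merge/binary hints belong here
        *.doc  diff=word
        *.tex  diff=tex
        *.bib  diff=bibtex
        *.py   diff=python
        *.eps  binary
        *.jpg  binary
        *.png  binary

        # .gitignore -- only path patterns belong here
        *~
        bin/

    Also note that .gitignore only stops untracked files from being added; anything already committed stays in every subtree split until it is untracked (git rm -r --cached <path>) and the removal committed.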

    Read the article

  • Calling a MVC2 partial view using jquery returns empty string problem

    - by Jason
    I have an issue where I have a partial view that returns some HTML to be displayed. It's called when something is clicked on the page using jQuery. The problem is that no matter how I call it, I get back an empty string even though it reports success. This is happening to me using Chrome, going against my local machine. My controller looks like this:

        public ActionResult MyPartialView() { return PartialView(model); }

    I have tried jQuery using .get(), .post() and .load(), and all have the same results. Here is an example using .post():

        $.post(url, function (data) { alert(data); });

    The result always comes back as an empty string. I can navigate to the partial view in the browser manually, and I get back the desired HTML. The URL I am using to call it is resolved fully, so it looks like "http://localhost/controller/mypartialview" rather than the relative path "/controller/mypartialview", which I thought was the original problem. Any idea what may cause this?

    Read the article

  • How to use Parcel in Android?

    - by Mike
    I'm trying to use Parcel to write and then read back a Parcelable. For some reason, when I read the object back from the file, it's coming back as null. public void testFoo() { final Foo orig = new Foo("blah blah"); // Wrote orig to a parcel and then byte array final Parcel p1 = Parcel.obtain(); p1.writeValue(orig); final byte[] bytes = p1.marshall(); // Check to make sure that the byte array seems to contain a Parcelable assertEquals(4, bytes[0]); // Parcel.VAL_PARCELABLE // Unmarshall a Foo from that byte array final Parcel p2 = Parcel.obtain(); p2.unmarshall(bytes, 0, bytes.length); final Foo result = (Foo) p2.readValue(Foo.class.getClassLoader()); assertNotNull(result); // FAIL assertEquals( orig.str, result.str ); } protected static class Foo implements Parcelable { protected static final Parcelable.Creator<Foo> CREATOR = new Parcelable.Creator<Foo>() { public Foo createFromParcel(Parcel source) { final Foo f = new Foo(); f.str = (String) source.readValue(Foo.class.getClassLoader()); return f; } public Foo[] newArray(int size) { throw new UnsupportedOperationException(); } }; public String str; public Foo() { } public Foo( String s ) { str = s; } public int describeContents() { return 0; } public void writeToParcel(Parcel dest, int ignored) { dest.writeValue(str); } } What am I missing? UPDATE: To simplify the test I've removed the reading and writing of files in my original example.

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, unless key violations etc. prevent it. The first step is simply to get the list of all column headers. There seem to be two ways:

    A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008. Now, right-click the result and click "Copy With Headers". However, I get zero rows back, and when I try to copy nothing + headers, I get:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        Value cannot be null.
        Parameter name: data (System.Windows.Forms)
        ------------------------------
        BUTTONS: OK
        ------------------------------

    This looks like a bug... anyhow, there is another way.

    B) I can run sp_help table_of_interest;. However, I end up getting too much back: 7 different result tables, of which I am only interested in the second one. The columns of the second table are:

        Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation

    I might be interested in just Column_name and Type, but maybe other columns too. So... since sp_help probably runs a bunch of queries... how do I get under the hood? How can I run the second query AND filter down the number of columns that I am interested in? Many thanks!
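    For what it's worth, a hedged alternative to digging under sp_help's hood: the standard metadata views expose the same list directly, and you can pick exactly the columns you want, e.g.:

        -- A sketch using the ANSI metadata views; swap in the table you care about.
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'table_of_interest'
        ORDER BY ORDINAL_POSITION;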

    Read the article

  • How to find if an Oracle APEX session is expired

    - by Mathieu Longtin
    I have created a single-sign-on system for our Oracle APEX applications, roughly based on this tutorial: http://www.oracle.com/technology/oramag/oracle/09-may/o39security.html The only difference is that my master SSO login is in Perl, rather than another APEX app. It sets an SSO cookie, and the app can check whether it's valid with a database procedure.

    I have noticed that when I arrive in the morning, the whole system doesn't work. I reload a page from the APEX app; it sends me to the SSO page because the session has expired; I log on and get redirected back to my original APEX app page. This usually works, except first thing in the morning. It seems the APEX session is expired. In that case it appears to find the session, but then refuses to use it and sends me back to the login page.

    I've tried my best to trace the problem. The "wwv_flow_custom_auth_std.is_session_valid" function returns true, so I'm assuming the session is valid. But nothing works until I remove the APEX session cookie; then I can log back in easily. Does anybody know if there is another call that would tell me whether the session is expired or not? Thanks

    Read the article

  • UIToolBar and iPhone Interface Orientation Problem

    - by Leo
    I am developing an iPhone application. One view supports both portrait and landscape orientation, and I have two separate views for the two orientations. This view has a UIToolbar at the top. The problem is that when I change the view back from landscape to portrait, the UIToolbar at the top disappears. I want the toolbar to come back to its original position when the device is rotated back to portrait. This is what I am doing in my program:

        - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation duration:(NSTimeInterval)duration
        {
            if (interfaceOrientation == UIInterfaceOrientationPortrait) {
                self.view = self.portrait;
                self.view.transform = CGAffineTransformIdentity;
                self.view.transform = CGAffineTransformMakeRotation(degreesToRadian(0));
                self.view.bounds = CGRectMake(0.0, 0.0, 300.0, 480.0);
            } else if (interfaceOrientation == UIInterfaceOrientationLandscapeLeft) {
                self.view = self.landscape;
                self.view.transform = CGAffineTransformIdentity;
                self.view.transform = CGAffineTransformMakeRotation(degreesToRadian(-90));
                self.view.bounds = CGRectMake(0.0, 0.0, 460.0, 320.0);
            } else if (interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
                self.view = self.portrait;
                self.view.transform = CGAffineTransformIdentity;
                self.view.transform = CGAffineTransformMakeRotation(degreesToRadian(180));
                self.view.bounds = CGRectMake(0.0, 0.0, 300.0, 480.0);
            } else if (interfaceOrientation == UIInterfaceOrientationLandscapeRight) {
                self.view = self.landscape;
                self.view.transform = CGAffineTransformIdentity;
                self.view.transform = CGAffineTransformMakeRotation(degreesToRadian(90));
                self.view.bounds = CGRectMake(0.0, 0.0, 460.0, 320.0);
            }
        }

    I don't know what I am missing here. Any help would be really appreciated.

    Read the article

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the onDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this.

    The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say), then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll... perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to the Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2, then draw bmpBuffer2 back to the Canvas.

    A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap and then draw that buffer (with a shift) onto the Canvas, but as far as I can see the various methods drawLine(), drawShape() and so on are not available for drawing to a Bitmap... only to a Canvas. Could I have 2 Canvases? One would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc., and then the buffer bitmap would be drawn onto the other Canvas for display in the View. (A sketch of this idea follows below.) I should welcome any advice!

    Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept, but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.copy Android's equivalents?
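    For what it's worth, here is an untested sketch of that two-canvas arrangement (drawScene and the doubled buffer size are assumptions). The scene is drawn once into an offscreen Bitmap through its own Canvas; onDraw then just blits that Bitmap shifted by the current drag offset. drawBitmap only reads the buffer, so no second copy is needed:

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.view.MotionEvent;
        import android.view.View;

        public class ScrollingView extends View {
            private Bitmap buffer;           // holds the fully drawn scene
            private Canvas bufferCanvas;     // draws lines/shapes into 'buffer'
            private float offsetX, offsetY;  // current scroll position
            private float lastX, lastY;

            public ScrollingView(Context context) {
                super(context);
            }

            @Override
            protected void onSizeChanged(int w, int h, int oldw, int oldh) {
                buffer = Bitmap.createBitmap(w * 2, h * 2, Bitmap.Config.ARGB_8888);
                bufferCanvas = new Canvas(buffer);
                drawScene(bufferCanvas);     // all drawLine/drawRect calls go here
            }

            @Override
            protected void onDraw(Canvas canvas) {
                // Blitting the buffer does not modify it.
                canvas.drawBitmap(buffer, -offsetX, -offsetY, null);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                if (event.getAction() == MotionEvent.ACTION_MOVE) {
                    offsetX -= event.getX() - lastX;  // drag moves the viewport
                    offsetY -= event.getY() - lastY;
                    invalidate();                     // schedule a redraw
                }
                lastX = event.getX();
                lastY = event.getY();
                return true;
            }

            private void drawScene(Canvas c) {
                // placeholder: the lines, shapes and bitmaps from the question
            }
        }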

    Read the article

  • UINavigationController crash because of pushing and poping UIViewControllers

    - by Wayne Lo
    My question is related to my discovery of a reason for UINavigationController to crash. So I will tell you about the discovery first. Please bear with me.

    The issue: I have a UINavigationController as a subview of UIWindow, a rootViewController class and a custom MyViewController class. The following steps will get an Exc_Bad_Access, 100% reproducible:

        [myNaviationController pushViewController:myViewController_1stInstance animated:YES];
        [myNaviationController pushViewController:myViewController_2ndInstance animated:YES];

    Then hit the left back bar item twice (popping the two myViewController instances) to show the rootViewController.

    After a painful half day of trial and error, I finally figured out the answer, but it also raises a question.

    The solution: I declared many objects in the .m file as a lazy way of declaring private variables, to avoid cluttering the .h file. For instance:

        #import "MyViewController.h"
        NSMutableString *variable1;
        @implement ...
        -(id)init { ... variable1 = [[NSMutableString alloc] init]; ... }
        -(void)dealloc { [variable1 release]; }

    For some reason, the iPhone OS may lose track of these "lazy private" variables' memory allocation when myViewController_1stInstance's view is unloaded (but still in the navigation controller's stack) after loading the view of myViewController_2ndInstance. The first tap on the back bar item is OK, since myViewController_2ndInstance's view is still loaded. But the second tap gave me hell because it tried to dealloc the 2nd instance: executing [variable1 release] resulted in Exc_Bad_Access, because it pointed somewhere random (a dangling pointer). Fixing this problem is simple: declare variable1 as a @private instance variable in the .h file (a sketch follows below).

    Here is my question: I have been using "lazy private" variables for quite some time without any issues, until they became involved with UINavigationController. Is this a bug in iPhone OS? Or is there a fundamental misunderstanding on my part about Objective C? Please help.
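    For clarity, a minimal sketch of that fix (names taken from the example above): a file-scope variable in the .m is shared by every instance of the class, whereas an instance variable gives each pushed controller its own copy to alloc and release.

        @interface MyViewController : UIViewController {
        @private
            NSMutableString *variable1;  // one copy per instance, not shared
        }
        @end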

    Read the article

  • Microsoft Surface - Flip & Scatter View Items

    - by Angelus
    Hi guys, I'm currently trying to get flip to work with ScatterView items using a plugin called Thriple (http://thriple.codeplex.com/), and I'm having some conceptual trouble with it. Essentially, a 2-sided Thriple control looks like this:

        <thriple:ContentControl3D
            xmlns:thriple="http://thriple.codeplex.com/"
            Background="LightBlue"
            BorderBrush="Black"
            BorderThickness="2"
            MaxWidth="200" MaxHeight="200">
            <thriple:ContentControl3D.Content>
                <Button Content="Front Side" Command="thriple:ContentControl3D.RotateCommand" Width="100" Height="100" />
            </thriple:ContentControl3D.Content>
            <thriple:ContentControl3D.BackContent>
                <Button Content="Back Side" Command="thriple:ContentControl3D.RotateCommand" Width="100" Height="100" />
            </thriple:ContentControl3D.BackContent>
        </thriple:ContentControl3D>

    What I'm struggling to grasp is whether I should make 2 separate ScatterView templates bound to the data I want, which would become the "front" and "back" of a ScatterViewItem, OR make 2 separate ScatterView items bound to the data I want, which are then bound to the "back" and "front" of a main ScatterViewItem. If there is a better way of doing flip animations with ScatterViewItems, that'd be cool too! Thanks!

    Read the article

  • Median Filter a bi-level image with JAI

    - by Mark
    I'd like to apply a Median Filter to a bi-level image and output a bi-level image. The JAI median filter seems to output an RGB image, which I'm having trouble downconverting back to bi-level. Currently I can't even get the image back into gray color-space; my code looks like this:

        BufferedImage src; // contains a bi-level image
        ParameterBlock pb = new ParameterBlock();
        pb.addSource(src);
        pb.add(MedianFilterDescriptor.MEDIAN_MASK_SQUARE);
        pb.add(3);
        RenderedOp result = JAI.create("MedianFilter", pb);

        ParameterBlock pb2 = new ParameterBlock();
        pb2.addSource(result);
        pb2.add(new double[][]{{0.33, 0.34, 0.33, 0}});
        RenderedOp grayResult = JAI.create("BandCombine", pb2);
        BufferedImage foo = grayResult.getAsBufferedImage();

    This code hangs on the grayResult line and appears not to return. I assume that I'll eventually need to call the "Binarize" operation in JAI.

    Edit: Actually, the code appears to be stalling once I call getAsBufferedImage(), but returns nearly instantly when the second operation ("BandCombine") is removed. Is there a better way to keep the median filtering in the source color domain? If not, how do I downconvert back to binary?
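    One untested possibility for the final step (the threshold value is an assumption): pull a single band out of the RGB result with JAI's BandSelect operation, then re-threshold it with the Binarize operation the question already anticipates:

        // Untested sketch: take one band of the median-filtered RGB image (for
        // a bi-level source all bands should match), then threshold to binary.
        ParameterBlock pbSelect = new ParameterBlock();
        pbSelect.addSource(result);         // output of the "MedianFilter" op
        pbSelect.add(new int[] { 0 });      // keep band 0 only
        RenderedOp gray = JAI.create("bandselect", pbSelect);

        ParameterBlock pbBinarize = new ParameterBlock();
        pbBinarize.addSource(gray);
        pbBinarize.add(128.0);              // threshold -- an assumption
        RenderedOp binary = JAI.create("binarize", pbBinarize);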

    Read the article

  • Android: Dialog themed activity not visible

    - by Vincent
    I have an activity which, when started, needs to check if the user is authenticated. If not, I need to display an interface to authenticate. I do this with another activity, which has a dialog theme, and I start it in onResume() with the flags NO_HISTORY and EXCLUDE_FROM_RECENTS. Everything works fine when starting the application for the first time. But I have a feature that resets the login after some time, if the user is not in an activity.

    When I test this, I start the application, enter the password, then move back to home. When I enter the application again, the background darkens as if the dialog would show, but it doesn't. At this point, if I press the back button, the darkening from the background activity disappears for a second, then the dialog finally appears.

    I used logcat to investigate the case, and the activity lifecycle functions get called properly:

        //For the first start:
        onCreate background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog
        //Enter password
        onPause dialog
        onResume background activity
        onStop dialog
        onDestroy dialog
        //navigating to homescreen
        onPause background activity
        onStop background activity
        //starting again
        onRestart background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog
        //no dialog shown, only darkened background activity receiving no input
        //pressing back button
        onPause dialog
        onResume background activity
        onPause background activity
        onCreate NEW dialog
        onStart NEW dialog
        onResume NEW dialog
        onStop OLD dialog
        onDestroy OLD dialog
        //now the dialog is properly shown
        //entering password
        onPause NEW dialog
        onResume background activity
        onStop NEW dialog
        onDestroy NEW dialog

    Using the SINGLE_TOP flag makes no change. However, if I remove the dialog theme from the dialog activity, it IS shown after the restart. So far I haven't wanted to use a Dialog instead of an Activity, because I consider them sometimes problematic and less encapsulated, and this part has to be quite secure. You may be able to convince me otherwise, though. Thank you in advance!

    Read the article

  • WMD markdown editor - HTML to Markdown conversion

    - by gg1
    I am using the WMD markdown editor on a project and had a question: when I post the form containing the markdown textarea, it (as expected) posts HTML to the server. However, say upon server-side validation something fails and I need to send the user back to edit their entry - is there any way to refill the textarea with just the markdown and not the HTML? As I have it set up, the server only has access to the post data (which is in the form of HTML), so I can't seem to think of a way to do this. Any ideas? Preferably a non-JavaScript-based solution.

    Update: I found an HTML-to-markdown converter called Markdownify. I guess this might be the best solution for displaying the markdown back to the user... any better alternatives are welcome!

    Update 2: I found this post on SO, and I guess there is an option to send the data to the server as markdown instead of HTML. Are there any downsides to simply storing the data as markdown in the database? What about displaying it back to the user (outside of an editor)? Maybe it would be best to post both versions (HTML AND markdown) to the server...

    SOLVED: I can simply use PHP Markdown to convert the markdown to HTML server-side.
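    For reference, the PHP Markdown usage referred to is essentially a one-liner (file name per that library's readme; untested here):

        <?php
        include_once "markdown.php";       // the PHP Markdown library
        $html = Markdown($markdown_text);  // convert stored markdown on output
        ?>

    Storing only the markdown and converting on display keeps a single source of truth; the trade-off is a small conversion cost on every render unless the HTML is cached.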

    Read the article

  • dll's loaded through reflection - Phantom Bug

    - by Seattle Leonard
    Ok, I've got a strange one here and I want to know if any of you have ever run across anything like it. I have a web app that loads up a bunch of dlls through reflection. Basically, it looks for types that are derived from certain abstract types and adds them to the list of things it can make (a sketch of this step follows below).

    Here's the weird part. While developing, there is never a problem. When installing it, there is never a problem to start with. Then, at a seemingly random time, the application breaks while trying to find all the types. I've had 2 sites sitting side by side where one worked while the other did not, and they were configured exactly (and I mean exactly) the same. IISRESET never helped, but this did: I simply moved all the dlls out of the bin directory, then moved them back. That's right: I just moved them out of the bin directory and put them right back where they came from, and everything worked fine. Any ideas?

    Got some more info. When the site is working, I notice this behavior: after IISRESET it still works, but recycling the app pool will cause it to break. When the site is broken: neither IISRESET nor recycling the app pool fixes it, but moving a single dll out and back in fixes it.
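    For context, the discovery step described above usually looks something like this hedged sketch (PluginBase and binPath are stand-ins for the real base type and path, not from the post):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        // Scan every dll in bin and collect concrete types derived from the
        // abstract base -- the step that intermittently fails for this site.
        var creatable = new List<Type>();
        foreach (string file in Directory.GetFiles(binPath, "*.dll"))
        {
            Assembly asm = Assembly.LoadFrom(file);
            foreach (Type t in asm.GetTypes())
            {
                if (!t.IsAbstract && t.IsSubclassOf(typeof(PluginBase)))
                    creatable.Add(t);
            }
        }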

    Read the article

  • Strange - Clicking Update button doesn’t cause a postback due to <!-- tag

    - by AspOnMyNet
    If I define the following template inside a DetailsView, then upon clicking an Update or Insert button the page is posted back to the server:

        <EditItemTemplate>
            <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox>
            <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator>
        </EditItemTemplate>

    If I remove the CompareValidator control from the above code by simply deleting it, then the page still gets posted back. But if instead I remove the CompareValidator control by enclosing it within <!-- --> tags, then for some reason clicking an Update or Insert button doesn't cause a postback... instead nothing happens:

        <EditItemTemplate>
            <asp:TextBox ID="txtDate" runat="server" Text='<%# Bind("Date") %>'></asp:TextBox>
            <!--
            <asp:CompareValidator ID="valDateType" runat="server" ControlToValidate="txtDate" Type="Date" Operator="DataTypeCheck" Display="Dynamic" >*</asp:CompareValidator>
            -->
        </EditItemTemplate>

    Any idea why the page doesn't get posted back? Thanks

    Read the article

  • Bring Winforms control to front

    - by Nathan
    Are there any other methods of bringing a control to the front besides control.BringToFront()? I have a series of labels on a user control, and when I try to bring one of them to the front it doesn't work. I have even looped through all the controls and sent them all to the back except for the one I am interested in, and it doesn't change a thing.

    Here is the method where a label is added to the user control:

        private void AddUserLabel()
        {
            UserLabel field = new UserLabel();
            ++fieldNumber;
            field.Name = "field" + fieldNumber.ToString();
            field.Top = field.FieldTop + fieldNumber;
            field.Left = field.FieldLeft + fieldNumber;
            field.Height = field.FieldHeight;
            field.Width = field.FieldWidth;
            field.RotationAngle = field.FieldRotation;
            field.Barcode = field.BarCoded;
            field.HumanReadable = field.HumanReadable;
            field.Text = field.FieldText;
            field.ForeColor = Color.Black;
            field.MouseDown += new MouseEventHandler(label_MouseDown);
            field.MouseUp += new MouseEventHandler(label_MouseUp);
            field.MouseMove += new MouseEventHandler(label_MouseMove);
            userContainer.Controls.Add(field);
            SendLabelsToBack(); // Send all labels to back
            userContainer.Controls[field.FieldName].BringToFront();
        }

    Here is the method that sends all of them to the back:

        private void SendLabelsToBack()
        {
            foreach (UserLabel lbl in userContainer.Controls)
            {
                lbl.SendToBack();
            }
        }

    Read the article
