Search Results

Search found 17788 results on 712 pages for 'last'.

Page 50/712 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Navigating through code with keyboard shortcuts

    - by MarceloRamires
    I'm starting to feel the need to move quickly through code with keyboard shortcuts, to get where I want to make changes faster (avoiding use of the mouse or long stretches holding [up], [left], [right] and [down]). I'm already using some:

        [home] - first position in current line
        [end] - last position in current line
        [ctrl] + [home] - first line of the entire code
        [ctrl] + [end] - last line of the entire code
        [pageup] - same vertical position, one screen above
        [pagedown] - same vertical position, one screen below
        [ctrl] + [pageup] - first line in current screen
        [ctrl] + [pagedown] - last line in current screen
        [ctrl] + [left/right] - skip word by word

    What have you got? I use Visual Studio (but I'm open to any answer, as I may use other editors soon). Note: I've searched through Stack Overflow and didn't find a question with this content, nor a list of code-navigation shortcuts. If it's a duplicate, I'm sorry for not finding it; I ask with the best intentions. This question is NOT about shortcuts in general, and not only about Visual Studio - it's about moving through code with shortcuts. Answers that suit the question so far:

        [Ctrl] + [-] - jumps to last cursor position
        [Ctrl] + [F3] - jumps to next occurrence of the word the cursor is in
        [Shift] + [F3] - same as the above, backwards
        [F12] - goes to definition of the method/variable the cursor is in
        [Ctrl] + []] - jumps to matching brace and selects

    I'll add more as answers come in.

    Read the article

  • android: consume key press, bypassing framework processing

    - by user360024
    What I want Android to do: when the user presses a single key, have the view respond, but do so without opening a text area and displaying the character associated with the key that was pressed, without requiring that the Enter key be pressed, and without requiring that the user press Esc to make the text area go away. For example, when the user presses "u" (and doesn't press Enter), that means "undo the last action", so the controller and model immediately undo the last action, then the view does an invalidate() and the user sees that their last action has been undone. In other words, the "u" key press should be silently processed, such that the only visual result is that the user's last action has been undone. I've implemented OnKeyListener and provided an onKey() method. The class:

        public class MyGameView extends View implements OnKeyListener {

    In the constructor:

        // 2010jun06, phj: With onKey(), helps let this View consume key presses
        // before the framework gets a chance to consume the key press.
        setOnKeyListener((View.OnKeyListener)this);

    The onKey() method:

        public boolean onKey(View v, int keyCode, KeyEvent event) {
            if (keyCode == KeyEvent.KEYCODE_R) {
                Log.d("BWA", "In onKey received keycode associated with R.");
            }
            return true; // meaning the event (key press) has been consumed, so
                         // the framework should not handle this event.
        }

    But when the user presses the "u" key on the emulator keypad, a text area is opened at the bottom of the screen, the "u" character is displayed there, and the onKey() method doesn't execute until the user presses the Enter key. Is there a way to make Android do what I want? Thanks.
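    A sketch of one common workaround, under the assumption that the View can take focus: override onKeyDown() in the View subclass instead of relying on an OnKeyListener, and make the view focusable so hardware key events are delivered to it directly. The undoLastAction() call is a hypothetical placeholder for the controller/model hook.

        import android.content.Context;
        import android.view.KeyEvent;
        import android.view.View;

        public class MyGameView extends View {
            public MyGameView(Context context) {
                super(context);
                setFocusable(true);            // let this view receive key events
                setFocusableInTouchMode(true); // keep focus while in touch mode
                requestFocus();
            }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) {
                if (keyCode == KeyEvent.KEYCODE_U) {
                    // undoLastAction();       // hypothetical model/controller call
                    invalidate();              // redraw so the user sees the undo
                    return true;               // consume the event silently
                }
                return super.onKeyDown(keyCode, event);
            }
        }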

    Read the article

  • jQuery + CSS Manipulation using each()

    - by atif089
    Hi, I am trying to create a CSS menu. I would like to remove the border and padding from the last li > a element of each child ul tag (sorry if that's confusing). Here is the code for the menu:

        <ul id="nav">
            <li><a id="cat1" href="#">Menu 1</a></li>
            <li><a id="cat2" href="#">Menu 2</a></li>
            <li><a id="cat3" href="#">Menu 3</a>
                <ul>
                    <li><a href="#">Sub Item 1 - M3</a></li>
                </ul>
            </li>
            <li><a id="cat4" href="#">Menu 4</a>
                <ul>
                    <li><a href="#">Sub Item 2 - M3</a></li>
                    <li><a href="#">Sub Item 3 - M3</a></li>
                    <li><a href="#">Sub Item 4 - M3</a></li>
                </ul>
            </li>
        </ul>

    The code must remove the border and padding for the last elements, that is:

        <li><a href="#">Sub Item 1 - M3</a></li>
        <li><a href="#">Sub Item 4 - M3</a></li>

    I tried this, but it takes the border off only Sub Item 4:

        $("#nav li ul").each(function(index) {
            $("#nav li ul > :last a").css('border','0 none');
            $("#nav li ul > :last a").css('padding','0');
        });
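    A likely fix, sketched under the assumption that the goal is the last li of every nested ul: the selector inside each() re-queries the whole document, so :last always resolves to the final match overall. Scoping the query to the current ul via $(this) handles each submenu independently.

        // Sketch: style the last item of each nested ul separately.
        $("#nav li ul").each(function() {
            $(this).children("li").last().children("a").css({
                border: "0 none",
                padding: "0"
            });
        });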

    Read the article

  • Help with active record relations

    - by Christian Fazzini
    class CreateActivities < ActiveRecord::Migration
      def self.up
        create_table :activities do |t|
          t.references :user
          t.references :media
          t.integer :artist_id
          t.string :type
          t.timestamps
        end
      end

      def self.down
        drop_table :activities
      end
    end

    class Fan < Activity
      belongs_to :user, :counter_cache => true
    end

    class Activity < ActiveRecord::Base
      belongs_to :user
      belongs_to :media
      belongs_to :artist, :class_name => 'User', :foreign_key => 'artist_id'
    end

    class User < ActiveRecord::Base
      has_many :activities
      has_many :fans
    end

    I tried changing my Activity model too, without any success:

    class Activity < ActiveRecord::Base
      has_many :activities, :class_name => 'User', :foreign_key => 'user_id'
      has_many :activities, :class_name => 'User', :foreign_key => 'artist_id'
    end

    One thing to note: Activity is an STI. Fan inherits from Activity. In the console, I do:

        # Create a fan object. User is a fan of himself
        fan = Fan.new
        => #<Fan id: nil, user_id: nil, media_id: nil, artist_id: nil, type: "Fan", comment: nil, created_at: nil, updated_at: nil>
        # Assign a user object
        fan.user = User.first
        => #<User id: 1, genre_id: 1, country_id: 1, ....
        # Assign an artist object
        fan.artist_id = User.first.id
        => 1
        # Save the fan object
        fan.save!
        => true
        Activity.last
        => #<Fan id: 13, user_id: 1, media_id: nil, artist_id: 1, type: "Fan", comment: nil, created_at: "2010-12-30 08:41:25", updated_at: "2010-12-30 08:41:25">
        Activity.last.user
        => #<User id: 1, genre_id: 1, country_id: 1, .....

    But...

        Activity.last.artist
        => nil

    Why is Activity.last.artist returning nil?
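    A hedged console check, assuming the belongs_to version of Activity shown first is the one actually loaded (if the later has_many rewrite replaced it, the artist association would no longer be declared at all): query the column and the association separately to see which one is at fault.

        # Sketch: narrow down whether the data or the association is the problem.
        a = Activity.last
        a.artist_id             # => 1, so the column is populated
        User.find(a.artist_id)  # confirm the referenced user row exists
        a.reload.artist         # bypass any cached association on the instance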

    Read the article

  • Assigning a value to an integer in a C linked list

    - by Drunk On Java
    Hello all. I have a question regarding linked lists. I have the following structs and function, for example:

        struct node {
            int value;
            struct node *next;
        };

        struct entrynode {
            struct node *first;
            struct node *last;
            int length;
        };

        void addnode(struct entrynode *entry) {
            struct node *nextnode = (struct node *)malloc(sizeof(struct node));
            int temp;
            if (entry->first == NULL) {
                printf("Please enter an integer.\n");
                scanf("%d", &temp);
                nextnode->value = temp;
                nextnode->next = NULL;
                entry->first = nextnode;
                entry->last = nextnode;
                entry->length++;
            } else {
                entry->last->next = nextnode;
                printf("Please enter an integer.\n");
                scanf("%d", nextnode->value);
                nextnode->next = NULL;
                entry->last = nextnode;
                entry->length++;
            }
        }

    In the first part of the if statement, I store the input in a temp variable and then assign that to a field in the struct. In the else branch, I tried to assign it directly, which did not work. How would I go about assigning it directly? Thanks for your time.
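    The direct assignment fails because scanf needs the address of its destination; scanf("%d", nextnode->value) passes the (uninitialized) int itself, which scanf then treats as a pointer. A minimal self-contained sketch of the corrected pattern:

        #include <stdio.h>
        #include <stdlib.h>

        struct node {
            int value;
            struct node *next;
        };

        int main(void) {
            struct node *nextnode = malloc(sizeof *nextnode);
            if (nextnode == NULL)
                return 1;
            printf("Please enter an integer.\n");
            /* & yields the address of the member, which is what scanf requires */
            if (scanf("%d", &nextnode->value) == 1)
                printf("Stored %d directly in the node.\n", nextnode->value);
            free(nextnode);
            return 0;
        }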

    Read the article

  • Is there any better way to capture the screen than PIL.ImageGrab.grab()?

    - by user1474837
    I am making a screen capture program with Python. My current problem is that PIL.ImageGrab.grab() gives me the same output even two seconds later. For instance - since I think I am not being clear - in the following session, almost all the images are the same and have the same Image.tostring() value, even though I was moving my screen while the PIL.ImageGrab.grab loop was executing.

        >>> from PIL.ImageGrab import grab
        >>> l = []
        >>> import time
        >>> for a in l:
                l.append(grab())
                time.sleep(0.01)
        >>> for a in range(0, 30):
                l.append(grab())
                time.sleep(0.01)
        >>> b = []
        >>> for a in l:
                b.append(a.tostring())
        >>> len(b)
        30
        >>> del l
        >>> last = []
        >>> a = 0
        >>> a = -1
        >>> last = ""
        >>> same = -1
        >>> for pic in b:
                if b == last:
                    same = same + 1
                last = b
        >>> same
        28
        >>>

    This is a problem, as all the images but one are the same: only 1 out of 30 is different. That would make for absolutely horrible video quality. Please tell me if there is any better-quality alternative to PIL.ImageGrab.grab(). I need to capture the whole screen. Thanks!
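    One caveat before blaming ImageGrab: the counting loop above compares the whole list b against last (and assigns last = b), so from the second iteration on the test is always true and same gets inflated regardless of what the frames contain. A sketch of the corrected duplicate count:

        # Compare each frame's byte string to the previous one, not the list itself.
        same = 0
        last = None
        for pic in b:
            if pic == last:
                same += 1
            last = pic
        print(same)  # genuine duplicate count

    If the grabs really are duplicated, a commonly suggested alternative is the third-party mss package, which captures the whole screen considerably faster than PIL.ImageGrab.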

    Read the article

  • On counting pairs of words that differ by one letter

    - by Quintofron
    Let us consider n words, each of length k. Those words consist of letters over an alphabet (whose cardinality is n) with a defined order. The task is to derive an O(nk) algorithm to count the number of pairs of words that differ by one position (no matter which position exactly, as long as it's only a single one). For instance, in the following set of words (n = 5, k = 4): abcd, abdd, adcb, adcd, aecd there are 5 such pairs: (abcd, abdd), (abcd, adcd), (abcd, aecd), (adcb, adcd), (adcd, aecd). So far I've managed to find an algorithm that solves a slightly easier problem: counting the number of pairs of words that differ by one GIVEN position (the i-th). To do this, I swap the letter at the i-th position with the last letter within each word, perform a radix sort (ignoring the last position in each word - formerly the i-th position), linearly detect words whose letters at positions 1 to k-1 are the same, and finally count the number of occurrences of each letter at the last (originally i-th) position within each set of duplicates and calculate the desired pairs (the last part is simple). However, the algorithm above doesn't seem to be applicable to the main problem (under the O(nk) constraint) - at least not without some modifications. Any idea how to solve this?
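    A sketch of one standard approach, assuming all n words are distinct: for each position i, bucket the words by the string obtained by deleting position i. Two distinct words land in the same bucket exactly when they differ only at position i, so summing "m choose 2" over all buckets and all positions counts each qualifying pair once. With hashing on the deleted-position keys this is O(nk) bucket operations (each key is k-1 characters long, so use rolling hashes if the strict O(nk) bound matters).

        from collections import defaultdict

        def count_pairs_differing_in_one_position(words):
            k = len(words[0])
            total = 0
            for i in range(k):
                buckets = defaultdict(int)
                for w in words:
                    buckets[w[:i] + w[i+1:]] += 1  # word with position i deleted
                total += sum(m * (m - 1) // 2 for m in buckets.values())
            return total

        print(count_pairs_differing_in_one_position(
            ["abcd", "abdd", "adcb", "adcd", "aecd"]))  # 5, matching the example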

    Read the article

  • Learning Python else syntax error

    - by user1441016
    Hi, I am learning Python by doing the practice problems for the MIT open course 6.00, Intro to Computer Science. I am trying to do practice problem 1, part 2: create a recursive function to count the instances of key in target. My code so far...

        from string import *
        def countSubStringMatchRecursive (target, key,x,s):
            if (find(target,key)==find(target,key,s)) and (find(target,key)==find(target,key,(find(target,key)))):#if first and last
                return (1)
            elif (find(target,key)==find(target,key,s))and (find(target,key)!=find(target,key,(find(target,key)))):#if first but not last
                x=1
                s= find(target,key)
                return (countSubStringMatchRecursive(target,key,s,x)
            elif (find(target,key,s))==-1 and (find(target,key)!=find(target,key,s)):#if last but not first
                return (x+1)
            elif:(find(target,key,s))!=-1 and (find(target,key)!=find(target,key,s)):#if not last and not first
                x=x+1
                s= find(target,key,s)
                return (countSubStringMatchRecursive(target,key,s,x)

    I am getting a syntax error at line 8. I would just like to know what I did wrong there. Don't worry about the other mistakes - I should be able to get those sorted out. I'm just stuck on this. Thanks.
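    The line 8 error is an unbalanced parenthesis: return (countSubStringMatchRecursive(target,key,s,x) opens two parentheses but closes only one (the final return has the same problem, and elif: on the fourth branch puts the colon in the wrong place). For comparison, a minimal recursive counter, assuming overlapping matches should be counted:

        def count_substring_matches(target, key, start=0):
            # str.find returns -1 once key no longer occurs at or after start
            pos = target.find(key, start)
            if pos == -1:
                return 0
            # advance one character so overlapping occurrences are counted too
            return 1 + count_substring_matches(target, key, pos + 1)

        print(count_substring_matches("atgacatgcacaagtatgcat", "atgc"))  # 2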

    Read the article

  • java packets byte

    - by user303289
    Guys, I am implementing a protocol in a wireless project and I am stuck at one point. In one of the Java files I am supposed to receive a packet; that packet is 12 bytes, and I have to write different functions for reading different parts of the packet and converting them to different types. I want the first four bytes in one function, converted to an int; the next two bytes as a String; again the next two as a String; the last-hop two bytes as a String; followed by the last two as an int. This is the skeleton I want to implement:

        // here is the interface
        /* FloodingData should use methods defined in this class. */
        class FloodingPacket {
            public static void main(String arg[]) {
                byte FloodingPack[];
                // just for example to test in code
                FloodingPack = new byte[12];

                interface IFloodingPacket {
                    // Returns the unique sequence number for the packet
                    int getSequenceNumber();
                    // Returns the source address for the packet
                    String getSourceAddress();
                    // Returns the destination address for the packet
                    String getDestinationAddress();
                    // Returns the last hop address for the packet
                    String getLastHopAddress();
                    // Sets the last hop address to the address of the node
                    // which the packet was received from
                    void updateLastHopAddress();
                    // Returns the entire packet in bytes (for sending)
                    byte[] getBytes();
                    // Sets the bytes of the packet (for receiving)
                    void setBytes(byte[] packet);
                }
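    A sketch of how those accessors might look using java.nio.ByteBuffer, under assumed field widths taken from the description (bytes 0-3: int sequence number; 4-5: source; 6-7: destination; 8-9: last hop; 10-11: trailing int). The offsets, byte order and US-ASCII decoding are assumptions, not part of the original protocol spec:

        import java.nio.ByteBuffer;
        import java.nio.charset.StandardCharsets;

        class FloodingPacketParser {
            private byte[] packet = new byte[12];

            void setBytes(byte[] bytes) {
                packet = bytes.clone();
            }

            int getSequenceNumber() {
                return ByteBuffer.wrap(packet, 0, 4).getInt();    // bytes 0-3, big-endian
            }

            String getSourceAddress() {
                return new String(packet, 4, 2, StandardCharsets.US_ASCII);  // bytes 4-5
            }

            String getDestinationAddress() {
                return new String(packet, 6, 2, StandardCharsets.US_ASCII);  // bytes 6-7
            }

            String getLastHopAddress() {
                return new String(packet, 8, 2, StandardCharsets.US_ASCII);  // bytes 8-9
            }

            int getTrailingValue() {
                return ByteBuffer.wrap(packet, 10, 2).getShort(); // bytes 10-11 as an int
            }
        }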

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring of the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product.

    In the first release of Oracle Utilities Customer Care And Billing (V1.x, I am talking about), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was its performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also to gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some MBeans supplied by the Web Application Server, it was not at the same granularity as txrpt, nor as useful. I am happy to say we have not only reintroduced this facility in the Oracle Utilities Application Framework, but it is now accessible via JMX, and we have also added more detail to the performance tracking. Most of this new design came from working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer-grained performance analysis.

    As with the Web Application Server, Business Application Server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business setting, and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. The credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6750
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

    ouaf.jmx.* files - contain the userid and password. The default configuration uses the standard JMX file-based security shown above. You can use additional security features by altering the spl.properties file manually or using a custom template. See JMX Security for more security options.

    Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed beforehand to set the environment variables; for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry shown above. You are then able to track performance of the product using the PerformanceStatistics MBean.

    The attributes of the PerformanceStatistics MBean are counts per object type. This is where this facility differs from txrpt. The information that is collected includes the following:

    The Service Type is captured so you can filter the results in terms of the type of service. For maintenance-type services you can even see the transaction type (ADD, CHANGE etc.), so you can compare the performance of updates against read transactions.
    The Minimum and Maximum are also collected to give you an idea of the spread of performance.
    The date, time and user of the last call are recorded to give you an idea of the timeliness of the data.
    The MBean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

    There are a number of interesting operations that can also be performed:

    reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset.

    completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool, or simply into your favourite spreadsheet for analysis.

    Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:

        ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
        ...
        CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
        CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
        CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
        CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
        CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
        CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
        InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
        InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
        MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
        NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
        ...

    There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud we implemented, as it allows flexible monitoring of the performance of the product.
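    A sketch of a programmatic JSR160 client for the same data, assuming the URL and credentials from the configuration above; the MBean ObjectName is a placeholder (check an MBean browser for the exact name your installation registers), and completeExecutionDump is assumed to return the CSV as a String:

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class OuafPerformanceDump {
            public static void main(String[] args) throws Exception {
                // URL from spl.runtime.management.connector.url.default
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector");
                Map<String, Object> env = new HashMap<String, Object>();
                // Credentials from the JMX enablement settings (placeholders here)
                env.put(JMXConnector.CREDENTIALS, new String[] {"JMX_USER", "JMX_PASSWORD"});
                JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
                try {
                    MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                    // Placeholder ObjectName - verify against your installation
                    ObjectName stats = new ObjectName(
                        "com.splwg.ejb.service.management:type=PerformanceStatistics");
                    Object csv = conn.invoke(stats, "completeExecutionDump",
                                             new Object[0], new String[0]);
                    System.out.println(csv);
                } finally {
                    jmxc.close();
                }
            }
        }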

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger. The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    The breakpoints window within Visual Studio 2010 lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base). The first and last breakpoints in the list break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage). The dialog that appears lets us create a new string label for our breakpoints or select an existing one we have already defined. In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover. When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label.

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label – for example, showing only those breakpoints with the “Lifetime Management” label.

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window. For example, I can filter my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down), export them to an XML file, and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible. You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. For example, I might “pin” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name” – where the last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger. Clicking the comment button at the bottom of this UI expands the DataTip and allows you to optionally add a comment with it. This makes it really easy to attach and track debugging notes.

    Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session, any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again, any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn’t running. On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up. Hovering your mouse over a pinned DataTip will cause it to display on the screen – for example, hovering over the first pin in the editor displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them. This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.

    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small. Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I’ll be covering some of the really big new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month.

    Earlier in the year, I (well, LobsterPot Solutions, although I'd been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere - not in the Windows Event Log, the SQL Agent logs, not anywhere. We'd managed to get the system working again, but didn't have a good explanation for what had happened, and last week the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn't end up touching my computer the whole time - just tapping on my phone via Twitter and Live Msgr.

    You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We'd done that last time, and it had started to work again - eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we'd been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared.

    So I got a message saying that a replication problem had occurred again. Reinitialising wasn't going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine; just this one in a different domain had the problem.

    Part of the problem seemed to be a log file that wasn't being backed up properly. They'd been trying to back up to a backup device that had a corruption, and the log file was growing. It turned out this wasn't related to the problem, but of course, any time you're troubleshooting and you see something untoward, you wonder.

    Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might've expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn't a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd -E -S servername command I recommended, it failed with a Named Pipes error. I've seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. It turned out he had a typo in the servername of the sqlcmd command. That particular red herring must've been reflected in his cheeks as he told me.

    During this time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger's (@onpnt) name came up. Ted (and thanks again, Ted - really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggested bumping the logging up to level 4 (2 is Verbose, 4 is undocumented ridiculousness).

    I'd just told the client to push the logging up to level 2, but the log file wasn't appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either - it was all working fine.

    Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005.

    Around this time, the client tried an idea that I'd shown him a few years ago - using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2 when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn't need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?) showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it's been running happily since.

    The funny thing is that I didn't solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I'd asked Ted too, and although he'd given some useful information, nothing that he'd come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don't end up getting the help from the person you asked - the sounding board is actually what you need. @rob_farley

    Read the article

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine.

        # du -hsx /
        11000283    /
        # df -kT /
        Filesystem               Type 1K-blocks      Used Available Use% Mounted on
        /dev/mapper/csisv13-root ext4 516032952 361387456 128432532  74% /

    There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains:

        # lsof -a +L1 /
        COMMAND    PID   USER FD TYPE DEVICE SIZE/OFF NLINK     NODE NAME
        zabbix_ag 4902 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4902 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)

    I rebooted to see if fsck does anything. But, from /var/log/boot.log, it seems there are no issues:

        /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks

    Thinking maybe someone overzealously reserved root space, I checked the master record:

        # tune2fs -l /dev/mapper/server-root
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          86430ade-cea7-46ce-979c-41769a41ecbe
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131064832
        Reserved block count:     6553241
        Free blocks:              5696264
        Free inodes:              28831903
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Feb  1 13:44:04 2013
        Last mount time:          Tue Aug 19 16:56:13 2014
        Last write time:          Fri Feb  1 13:51:28 2013
        Mount count:              9
        Maximum mount count:      -1
        Last checked:             Fri Feb  1 13:44:04 2013
        Check interval:           0 (<none>)
        Lifetime writes:          1215 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28836028
        Default directory hash:   half_md4
        Directory Hash Seed:      bca55ff5-f530-48d1-8347-25c004f66d43
        Journal backup:           inode blocks

    The system is:

        # uname -a
        Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

    Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?

    Read the article

  • Clear Type problem in Windows 7

    - by Florin Sabau
    I'm trying to tune ClearType in Windows 7 x64 using the ClearType Text Tuner. I can choose whatever options I want on the first 3 pages, but on the last page, whatever I choose is reverted as soon as I click Finish. The next time I run the tuner I can see that the second option is selected, not the option that I wanted (the last one). Has anybody else run into this odd behavior?

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Read the article

  • Using a pre-existing function for a new row

    - by Jonathan Kushner
    I have an Excel document that contains X columns and N rows. The very last column of a row performs a SUM of the first X-1 columns. The problem I have is that the user of this Excel document progressively adds rows to it, so the SUM function does not yet exist in the last column of the new rows. I need a way to have this function appear in new rows dynamically (the user is not Excel-savvy and can't be expected to drag the function down a row).

    Read the article

  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to set up Wordpress multisite (subfolder structure) with nginx, but I'm having a problem with this rewrite rule. Below is Apache's .htaccess, which I have to translate into nginx configuration.

        RewriteEngine On
        RewriteBase /blogs/
        RewriteRule ^index\.php$ - [L]

        # uploaded files
        RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Below is what I came up with:

        server {
            listen 80;
            server_name example.com;
            server_name_in_redirect off;
            expires 1d;

            access_log /srv/www/example.com/logs/access.log;
            error_log /srv/www/example.com/logs/error.log;

            root /srv/www/example.com/public;
            index index.html;
            try_files $uri $uri/ /index.html;

            # rewriting uploaded files
            rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last;

            # add a trailing slash to /wp-admin
            rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent;

            if (!-e $request_filename) {
                rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last;
                rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last;
            }

            location /blogs/ {
                index index.php;
                #try_files $uri $uri/ /blogs/index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
            }

            # static assets
            location ~* ^.+\.(manifest)$ {
                access_log /srv/www/example.com/logs/static.log;
            }

            location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                # only set expires max IFF the file is a static file and exists
                if (-f $request_filename) {
                    expires max;
                    access_log /srv/www/example.com/logs/static.log;
                }
            }
        }

    In the above configuration, I believe rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; has no effect, because when I look at the error log I see the following line:

        2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1"

    (Here, 'test' is the second blog created using the multisite feature.) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but it doesn't seem to do that... Am I overlooking something obvious? Thanks!

    Read the article

  • Windows Server 2008 R2 - 180 day evaluation vs. 10 days + 5 re-arms

    - by Rob
    The content on the Server 2008 R2 Trial Software page states that it can be evaluated for up to 180 days, yet on a test machine we installed last week it requests "re-arming" every 10 days, which seems to be possible a maximum of 5 times. How do we get it to last more than 50 days? It'd be a pain to have to rebuild the server concerned!

    Read the article

  • incorrect password when computer is locked

    - by cyntaxx
    Hi there, I have a machine running Windows XP SP3 and I can't log in after I have locked my workstation. I changed my password and installed the latest updates from Microsoft last Friday. When Windows comes up, there is no problem logging in. But after I lock it, it tells me that my password is wrong. I re-joined the client to the domain, but that doesn't help. Thanks, cyntaxx

    Read the article

  • Firefox: Where does firefox store the opened windows/tabs/urls on Crash for Restoring?

    - by jens
    Hello, in which location and file does Firefox save the last windows I had opened (when Firefox crashed)? I have a complete "hot dump" copy of a file system and need to restore the state Firefox was in when the system crashed, but I cannot restore the full backup itself. I can only extract Firefox's files, and I do not know in which files I have to search for the URLs that were last open when the snapshot of the whole filesystem was taken. Thanks!!!

    Read the article

  • Apache stopped serving all sites

    - by user36158
    Hi everyone, hope you can help me. Up until last night, all the sites on my server were displaying fine, but now whenever you visit any of them you get the default "Welcome to Your New Home in Cyberspace!" page. All the domains are set up correctly, were working until last night, and I haven't edited any of the Apache files, so I really can't see why they have broken. I am using Debian and all the sites have been enabled. Really hope someone can help me. David

    Read the article

  • Sharepoint Discussion Board w/ attachments expiration

    - by Mike
    I want to set a retention policy (DB Settings - Information Management Policy Settings) on a discussion board, but does the attachment get deleted as well? Also, I have a discussion board retention policy right now that isn't working properly. The criteria is: Last Updated + 30 days, then Delete. There are plenty of discussion items that are long past "Last Updated". Any ideas why?

    Read the article

  • Zipping folder with absolute path without keeping tree of folders

    - by Preston
    I am attempting to use the zip -r command to zip a folder which includes two files. I need to pass the absolute path of the folder containing the two files (/path/to/my/files/), which causes the whole tree of parent folders to be zipped along with it, whereas I only need the last folder (files/) and its contents, so that when the file is unzipped there is only one folder with the two files within it. How can I modify the command so I can still pass the absolute path as an argument while keeping only the last folder?
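    A sketch of one common approach, assuming the goal is an archive whose only top-level entry is files/: zip stores paths relative to the current directory, so change into the parent directory first (the archive path below is a placeholder).

        # Store only "files/" and its contents, regardless of the absolute source path.
        (cd /path/to/my && zip -r /tmp/archive.zip files)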

    Read the article

  • Is there a system monitoring tool that lets me write complex queries against the data?

    - by benhsu
    I am looking for a system stat collection tool that will let me write queries against the collected data. I am planning to answer questions like:

        what is the average load on this machine over the last 30 days between 9AM and 5PM, as opposed to at night
        what was the average disk IO on these 10 machines yesterday
        what was the average daytime memory usage on these 10 machines last week, as opposed to 2 weeks ago

    Has anyone done this with, say, collectd or graphite?

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >