Search Results

Search found 27581 results on 1104 pages for 'execute command'.


  • Can I do filename pattern matching in a bash script?

    - by Bob Bowden
    Can I do filename pattern matching in a bash script? "test" is a directory with the following files:

        bob@bob-laptop:~/test$ ls
        exclude  exclude1  exclude2  include1  include2

    From the command line, if I want to exclude some of the files, I can do:

        bob@bob-laptop:~/test$ echo !(exclude*)
        include1 include2

    But if I put that command in a script (named exclude):

        bob@bob-laptop:~/test$ cat exclude
        echo !(exclude*)

    then when I execute it, I get an error:

        bob@bob-laptop:~/test$ ./exclude
        ./exclude: line 1: syntax error near unexpected token `('
        ./exclude: line 1: `echo !(exclude*)'

    I've tried (I think) every variation of escaping some, all or none of the special characters, and I still get an error. What am I missing here? If I can't do this, would someone please be so kind as to explain why?
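    A likely explanation, for context: !(...) is an extglob pattern, and interactive bash sessions on Ubuntu typically have the extglob option switched on (via the completion setup), while scripts start with it off, so the script's parser chokes on the parenthesis. A minimal sketch of the usual fix, enabling extglob inside the script before the pattern is parsed:

        #!/bin/bash
        # extglob must be enabled before bash parses the line containing !(...);
        # bash reads scripts command by command, so an earlier line works
        shopt -s extglob
        echo !(exclude*)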


  • SQL SERVER – The Story of a Lesser Known Startup Parameter in SQL Server – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund (blog | twitter | facebook). He presented a fantastic session at our last UG meeting, and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire post in Balmukund's own voice.

    During my last session at the SQL Bangalore User Group (Facebook) meeting, I was lucky enough to deliver a session on SQL Server startup issues. The name of the session was "SQL Engine Starting Trouble – How to start?" From the feedback, I realized that one of the less well known startup parameters is -m. You might say, "I know that this is used to start SQL Server in single-user mode." But what you might not know is that you can pass a string with -m which has a special meaning and use. I have used this parameter in my blog here, but it looks like not many of you have seen that.

    It happens most of the time that when we want to start SQL Server in single-user mode, someone else makes a connection before we can. The only choice we have is to repeat the same process again until we succeed. Some smart DBAs may disable the remote network protocols (TCP/IP and Named Pipes) of the SQL instance and allow only local connections; once the activity is complete, our dear smart DBA has to remember to re-enable the network protocols. Sometimes it may be a local service or application getting a connection to SQL before we can. There is a better way to deal with it. Yes, you have guessed it correctly: the -m parameter, which accepts a string. Since I work in the SQL Product Support team, I may know a few more undocumented commands and parameters, but this is not undocumented stuff: it is already documented in Books Online. So in this blog, I am going to show a demo of its usage. As the documentation says, "Do not use this option as a security feature." So please read this blog as a knowledge enhancer and troubleshooting aid, not a security feature.

    On my laptop, I have a default instance of SQL Server 2012, and here is what we would see in Configuration Manager. Now, I would go ahead and stop the SQL service by selecting SQL Server (MSSQLServer) > Right Click > Stop. There are multiple ways to start SQL with a startup parameter:

    1) Use the Net Start command from a command prompt: Net Start MSSQLServer /mSQLCMD. This is the simplest way to add a startup parameter to SQL. The parameter is cleared once we stop and start SQL again.

    2) Add the startup parameter via Configuration Manager. The steps are already listed here; we need to add -mSQLCMD. If we compare 1 and 2, it is clear that with this method the parameter remains in effect until we modify the startup parameters and remove -m.

    3) Start the SQL service via the command line: SQLServr.exe -mSQLCMD -s<InstanceName>

    Wait, what does SQLCMD mean with /m? It is an instruction to start SQL Server in single-user mode and allow only the application named SQLCMD to connect. Any other application will fail with a "Login failed for user" error message. It is important to note that the string is case sensitive. The value should match the application_name column in sys.dm_exec_sessions. I have made a connection using SQLCMD, and as we can see it appears in upper case as "SQLCMD". If we want only Management Studio query windows to connect, then we need to give -m"Microsoft SQL Server Management Studio - Query" as the startup parameter. In the example below, I have given it as SQLCMd (lower case d at the end), and we notice that we are not able to connect to the SQL instance.

    The above proves that the parameter works as expected and that it is case sensitive. The Error Log shows the information below. How do you get the error log location? I have already blogged about it. Hope you have learned something new.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
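    Putting the post's own commands together, here is a quick sketch of the round trip, assuming the default instance and an elevated command prompt:

        REM Start the default instance in single-user mode, admitting only sqlcmd
        net start MSSQLServer /mSQLCMD

        REM This connection succeeds (its application_name is "SQLCMD")...
        sqlcmd -S . -E

        REM ...while SSMS or any other client now fails with "Login failed for user"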


  • Typing commands into a terminal always returns "-bash: /usr/bin/python: is a directory"

    - by Artur Sapek
    I think I messed something up on my Ubuntu server while trying to upgrade to Python 2.7.2. Every time I type a command that doesn't exist, instead of the usual "command not found", bash answers:

        -bash: /usr/bin/python: is a directory

    Just like it would say if I typed the name of a directory. But this happens every time I enter a command that doesn't resolve to anything:

        artur@SERVER:~$ dslkfjdsklfdshjk
        -bash: /usr/bin/python: is a directory

    I remember messing with update-alternatives to point at python at some point; perhaps that could be it? Any inklings as to why this is happening?

    Related to this problem is the fact that when I try using easy_install, it tells me:

        -bash: /usr/bin/easy_install: /usr/bin/python: bad interpreter: Permission denied

    The filesystem is mounted exec in /etc/fstab; I've read that could fix the second problem, but it hasn't.
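    For what it's worth, both symptoms are consistent with /usr/bin/python having become a directory rather than an interpreter: bash's command-not-found hook and easy_install's shebang line both try to execute it. A hedged sketch of how one might verify and repair that; the python2.7 target is an assumption, so point the symlink at whatever interpreter is actually installed:

        # See what /usr/bin/python currently is
        ls -ld /usr/bin/python*

        # If it really is a directory, move it aside rather than deleting it outright
        sudo mv /usr/bin/python /usr/bin/python.broken

        # Restore the conventional symlink (target path assumed)
        sudo ln -s /usr/bin/python2.7 /usr/bin/python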


  • I am trying to build libmtp 1.1.14 but I cannot fix this error

    - by Kristoffer
    I have run this in a terminal:

        git clone git://libmtp.git.sourceforge.net/gitroot/libmtp/libmtp
        cd libmtp
        ./autogen.sh

    (answering yes to all questions). But when I try to run ./configure --prefix=/usr/ I get this error:

        checking whether to build static libraries... yes
        ./configure: line 11739: AC_LIB_PREPARE_PREFIX: command not found
        ./configure: line 11740: AC_LIB_RPATH: command not found
        ./configure: line 11745: syntax error near unexpected token `iconv'
        ./configure: line 11745: ` AC_LIB_LINKFLAGS_BODY(iconv)'

    I have built and installed libiconv from here. I do not know what to do; I've been trying for a few hours, but I am pretty new to Linux. How can I fix this? Lines 11739 to 11745 in the configure file look like this:

        AC_LIB_PREPARE_PREFIX
        AC_LIB_RPATH
        AC_LIB_LINKFLAGS_BODY(iconv)
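    One observation that may help, offered as a likely cause rather than a verified fix: unexpanded AC_LIB_* names surviving into a generated configure script usually mean autoconf could not find the m4 macro files that define them (they ship with gettext/libiconv), so regenerating the build system with those tools installed often resolves it. The package names below are Ubuntu's:

        # Install the autotools pieces that provide the AC_LIB_* macros
        sudo apt-get install gettext autopoint libtool automake autoconf pkg-config

        # Regenerate configure from within the libmtp checkout
        cd libmtp
        autoreconf -fi        # or re-run ./autogen.sh
        ./configure --prefix=/usr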


  • GNU Smalltalk package

    - by Peter
    I've installed the GNU Smalltalk package and can get to the Smalltalk command line with the command gst. However, I can't start the visual gst browser using the command gst-browser. When I try, this is what I get:

        peter@peredur:~$ gst-browser
        Object: CFunctionDescriptor new: 1 "<0x40488720>" error: Invalid C call-out gdk_colormap_get_type
        SystemExceptions.CInterfaceError(Smalltalk.Exception)>>signal (ExcHandling.st:254)
        SystemExceptions.CInterfaceError class(Smalltalk.Exception class)>>signal: (ExcHandling.st:161)
        Smalltalk.CFunctionDescriptor(Smalltalk.CCallable)>>callInto: (CCallable.st:165)
        GdkColormap class>>getType (GTK.star#VFS.ZipFile/Funcs.st:1)
        optimized [] in GLib class>>registerAllTypes (GTK.star#VFS.ZipFile/GtkDecl.st:78)
        Smalltalk.OrderedCollection>>do: (OrderColl.st:68)
        GLib class>>registerAllTypes (GTK.star#VFS.ZipFile/GtkDecl.st:78)
        Smalltalk.UndefinedObject>>executeStatements (GTK.star#VFS.ZipFile/GtkImpl.st:1078)

        Object: CFunctionDescriptor new: 1 "<0x404a7c28>" error: Invalid C call-out gtk_window_new
        SystemExceptions.CInterfaceError(Exception)>>signal (ExcHandling.st:254)
        SystemExceptions.CInterfaceError class(Exception class)>>signal: (ExcHandling.st:161)
        CFunctionDescriptor(CCallable)>>callInto: (CCallable.st:165)
        GTK.GtkWindow class>>new: (GTK.star#VFS.ZipFile/Funcs.st:1)
        VisualGST.GtkDebugger(VisualGST.GtkMainWindow)>>initialize (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:131)
        VisualGST.GtkDebugger class(VisualGST.GtkMainWindow class)>>openSized: (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:19)
        [] in VisualGST.GtkDebugger class>>open: (VisualGST.star#VFS.ZipFile/Debugger/GtkDebugger.st:16)
        [] in BlockClosure>>forkDebugger (DebugTools.star#VFS.ZipFile/DebugTools.st:380)
        [] in Process>>onBlock:at:suspend: (Process.st:392)
        BlockClosure>>on:do: (BlkClosure.st:193)
        [] in Process>>onBlock:at:suspend: (Process.st:393)
        BlockClosure>>ensure: (BlkClosure.st:269)
        [] in Process>>onBlock:at:suspend: (Process.st:370)
        [] in BlockClosure>>asContext: (BlkClosure.st:179)
        BlockContext class>>fromClosure:parent: (BlkContext.st:68)

    Everything hangs at this point until I hit ^C, after which I get the same "Invalid C call-out gtk_window_new" traceback printed again, and I am returned to the prompt. Is there a problem with this package?


  • What 'ordinary' packages does 'sudo apt-get purge wine*' remove?

    - by an_ant
    OK, here's one silly question. I actually ran that command trying to lose the configuration of my current Wine installation, so I could install it from scratch. I didn't read carefully, but I noticed that some packages that shouldn't be uninstalled were being uninstalled, like compiz for example... Now my problem is that I don't know which other packages got uninstalled, and I can't get into Ubuntu at all. Help, please. The command was sudo apt-get purge wine*. Thank you. Please be nice; I was really tired :)
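    A note on why this bites, plus a hedged recovery sketch: apt-get treats an argument containing regex characters as a POSIX regex matched anywhere in package names, so wine* ("win" followed by any number of e's) can match far more than the Wine packages. Recent apt logs record exactly what was purged, so from a recovery console or a live session something like the following can rebuild the desktop; ubuntu-desktop is assumed as the metapackage for a stock install:

        # List what the purge actually removed
        grep -i purge /var/log/apt/history.log | less

        # Reinstall the desktop stack
        sudo apt-get update
        sudo apt-get install ubuntu-desktop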


  • Unable to sign in. How to debug?

    - by Dmitriy Budnik
    I had to reboot the system with the reset button. After the reboot I can't sign in: when I enter my password, it seems like the X server just restarts. I can sign in as guest, and I can also sign in on a text TTY. Here are the first 150 lines of my lightdm.log:

        [+0.04s] DEBUG: Logging to /var/log/lightdm/lightdm.log
        [+0.04s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=1070
        [+0.04s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
        [+0.04s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
        [+0.04s] DEBUG: Registered seat module xlocal
        [+0.04s] DEBUG: Registered seat module xremote
        [+0.04s] DEBUG: Adding default seat
        [+0.04s] DEBUG: Starting seat
        [+0.04s] DEBUG: Starting new display for automatic login as user dmytro
        [+0.04s] DEBUG: Starting local X display
        [+3.64s] DEBUG: X server :0 will replace Plymouth
        [+3.66s] DEBUG: Using VT 7
        [+3.66s] DEBUG: Activating VT 7
        [+3.66s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+3.66s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+3.66s] DEBUG: Launching X Server
        [+3.66s] DEBUG: Launching process 1154: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none
        [+3.66s] DEBUG: Waiting for ready signal from X server :0
        [+3.66s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
        [+3.66s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
        [+10.78s] DEBUG: Got signal 10 from process 1154
        [+10.78s] DEBUG: Got signal from X server :0
        [+10.78s] DEBUG: Stopping Plymouth, X server is ready
        [+10.80s] DEBUG: Connecting to XServer :0
        [+10.80s] DEBUG: Automatically logging in user dmytro
        [+10.80s] DEBUG: Started session 1303 with service 'lightdm-autologin', username 'dmytro'
        [+13.22s] DEBUG: Session 1303 authentication complete with return value 0: Success
        [+13.26s] DEBUG: Autologin user dmytro authorized
        [+13.27s] DEBUG: Autologin using session ubuntu
        [+14.44s] DEBUG: Dropping privileges to uid 1000
        [+14.48s] DEBUG: Restoring privileges
        [+14.49s] DEBUG: Dropping privileges to uid 1000
        [+14.49s] DEBUG: Writing /home/dmytro/.dmrc
        [+14.61s] DEBUG: Restoring privileges
        [+14.81s] DEBUG: Starting session ubuntu as user dmytro
        [+14.81s] DEBUG: Session 1303 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu
        [+15.76s] DEBUG: New display ready, switching to it
        [+15.76s] DEBUG: Activating VT 7
        [+15.76s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0
        [+16.63s] DEBUG: Session 1303 exited with return value 0
        [+16.63s] DEBUG: User session quit
        [+16.63s] DEBUG: Stopping display
        [+16.63s] DEBUG: Sending signal 15 to process 1154
        [+17.19s] DEBUG: Process 1154 exited with return value 0
        [+17.19s] DEBUG: X server stopped
        [+17.19s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
        [+17.19s] DEBUG: Releasing VT 7
        [+17.19s] DEBUG: Display server stopped
        [+17.19s] DEBUG: Display stopped
        [+17.19s] DEBUG: Active display stopped, switching to greeter
        [+17.19s] DEBUG: Switching to greeter
        [+17.19s] DEBUG: Starting new display for greeter
        [+17.19s] DEBUG: Starting local X display
        [+17.19s] DEBUG: Using VT 7
        [+17.19s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+17.19s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+17.19s] DEBUG: Launching X Server
        [+17.19s] DEBUG: Launching process 1563: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        [+17.19s] DEBUG: Waiting for ready signal from X server :0
        [+17.48s] DEBUG: Got signal 10 from process 1563
        [+17.48s] DEBUG: Got signal from X server :0
        [+17.48s] DEBUG: Connecting to XServer :0
        [+17.48s] DEBUG: Starting greeter
        [+17.48s] DEBUG: Started session 1575 with service 'lightdm', username 'lightdm'
        [+17.61s] DEBUG: Session 1575 authentication complete with return value 0: Success
        [+17.61s] DEBUG: Greeter authorized
        [+17.61s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
        [+17.68s] DEBUG: Session 1575 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
        [+20.86s] DEBUG: Greeter connected version=1.2.1
        [+20.86s] DEBUG: Greeter connected, display is ready
        [+20.86s] DEBUG: New display ready, switching to it
        [+20.86s] DEBUG: Activating VT 7
        [+20.86s] DEBUG: Stopping greeter display being switched from
        [+24.90s] DEBUG: Greeter start authentication for dmytro
        [+24.90s] DEBUG: Started session 1746 with service 'lightdm', username 'dmytro'
        [+25.10s] DEBUG: Session 1746 got 1 message(s) from PAM
        [+25.10s] DEBUG: Prompt greeter with 1 message(s)
        [+31.87s] DEBUG: Continue authentication
        [+33.75s] DEBUG: Session 1746 authentication complete with return value 7: Authentication failure
        [+33.75s] DEBUG: Authenticate result for user dmytro: Authentication failure
        [+33.75s] DEBUG: Greeter start authentication for dmytro
        [+33.75s] DEBUG: Session 1746: Sending SIGTERM
        [+33.75s] DEBUG: Started session 2264 with service 'lightdm', username 'dmytro'
        [+33.75s] DEBUG: Session 2264 got 1 message(s) from PAM
        [+33.75s] DEBUG: Prompt greeter with 1 message(s)
        [+36.41s] DEBUG: Continue authentication
        [+36.53s] DEBUG: Session 2264 authentication complete with return value 0: Success
        [+36.53s] DEBUG: Authenticate result for user dmytro: Success
        [+36.54s] DEBUG: User dmytro authorized
        [+36.54s] DEBUG: Greeter requests session ubuntu
        [+36.54s] DEBUG: Using session ubuntu
        [+36.54s] DEBUG: Stopping greeter
        [+36.54s] DEBUG: Session 1575: Sending SIGTERM
        [+37.41s] DEBUG: Greeter closed communication channel
        [+37.41s] DEBUG: Session 1575 exited with return value 0
        [+37.41s] DEBUG: Greeter quit
        [+37.42s] DEBUG: Dropping privileges to uid 1000
        [+37.42s] DEBUG: Restoring privileges
        [+37.43s] DEBUG: Dropping privileges to uid 1000
        [+37.43s] DEBUG: Writing /home/dmytro/.dmrc
        [+38.35s] DEBUG: Restoring privileges
        [+40.37s] DEBUG: Starting session ubuntu as user dmytro
        [+40.37s] DEBUG: Session 2264 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu
        [+40.39s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session1
        [+50.78s] DEBUG: Session 2264 exited with return value 0
        [+50.78s] DEBUG: User session quit
        [+50.78s] DEBUG: Stopping display
        [+50.78s] DEBUG: Sending signal 15 to process 1563
        [+51.53s] DEBUG: Process 1563 exited with return value 0
        [+51.53s] DEBUG: X server stopped
        [+51.53s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
        [+51.53s] DEBUG: Releasing VT 7
        [+51.53s] DEBUG: Display server stopped
        [+51.53s] DEBUG: Display stopped
        [+51.53s] DEBUG: Active display stopped, switching to greeter
        [+51.53s] DEBUG: Switching to greeter
        [+51.53s] DEBUG: Starting new display for greeter
        [+51.53s] DEBUG: Starting local X display
        [+51.53s] DEBUG: Using VT 7
        [+51.53s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+51.53s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+51.53s] DEBUG: Launching X Server
        [+51.53s] DEBUG: Launching process 2894: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        [+51.53s] DEBUG: Waiting for ready signal from X server :0
        [+51.75s] DEBUG: Got signal 10 from process 2894
        [+51.75s] DEBUG: Got signal from X server :0
        [+51.75s] DEBUG: Connecting to XServer :0
        [+51.75s] DEBUG: Starting greeter
        [+51.75s] DEBUG: Started session 2898 with service 'lightdm', username 'lightdm'
        [+51.76s] DEBUG: Session 2898 authentication complete with return value 0: Success
        [+51.76s] DEBUG: Greeter authorized
        [+51.76s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
        [+51.76s] DEBUG: Session 2898 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter
        [+53.26s] DEBUG: Greeter connected version=1.2.1
        [+53.26s] DEBUG: Greeter connected, display is ready
        [+53.26s] DEBUG: New display ready, switching to it
        [+53.26s] DEBUG: Activating VT 7
        [+53.26s] DEBUG: Stopping greeter display being switched from
        [+54.17s] DEBUG: Greeter start authentication for dmytro
        [+54.17s] DEBUG: Started session 3152 with service 'lightdm', username 'dmytro'
        [+54.18s] DEBUG: Session 3152 got 1 message(s) from PAM
        [+54.18s] DEBUG: Prompt greeter with 1 message(s)
        [+58.61s] DEBUG: Continue authentication
        [+58.65s] DEBUG: Session 3152 authentication complete with return value 0: Success
        [+58.65s] DEBUG: Authenticate result for user dmytro: Success
        [+58.66s] DEBUG: User dmytro authorized
        [+58.66s] DEBUG: Greeter requests session ubuntu
        [+58.66s] DEBUG: Using session ubuntu
        [+58.66s] DEBUG: Stopping greeter
        [+58.66s] DEBUG: Session 2898: Sending SIGTERM

    How can I fix it? What other log files could possibly give me a clue?

    Update: possibly it's a duplicate of Desktop login fails, terminal works.
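    A reading of the log that may help, hedged as a common cause rather than a diagnosis: PAM authentication succeeds ([+36.53s] ... Success) yet the user session exits almost immediately ([+50.78s] Session 2264 exited with return value 0), which after a hard reset is classically caused by root-owned session files in the home directory. A sketch to check from a text TTY; the user name is taken from the log above:

        # Look for root-owned session files (a frequent culprit after a crash)
        ls -l ~/.Xauthority ~/.ICEauthority

        # If either is owned by root, take it back and retry the graphical login
        sudo chown dmytro:dmytro /home/dmytro/.Xauthority /home/dmytro/.ICEauthority

        # Session startup errors, if any, are logged here
        tail -n 50 ~/.xsession-errors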


  • Database API commands

    - by Rahul Mehta
    As I am developing a database API for a project, I am designing the commands for getting data from the database. For example, I have one gib table, and the command for it is:

        getgib name alias limit fields

    If the user passes the name, e.g. getgib rahul, it will return all the gib data whose name is like rahul. If an alias is given, it will return all the gibs owned by the user whose alias (user id) was given. limit restricts the number of records returned by the query, and fields lists the extra fields I want to add to the select query.

    The commands are set, but now:

    Question 1: I also want to fetch gibs by gibid, so how should I add this? Any suggestion to improve my command is welcome.

    Question 2: If the user doesn't want to specify the name, and wants only the gibs for a given alias, what separator should go in place of the name?
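    One common way out of the positional-placeholder problem in question 2, offered as a design suggestion rather than the answer: switch from positional arguments to named options, so skipped parameters are simply not passed, and new filters such as id slot in without changing the grammar. The option names below are only illustrative:

        # Positional form forces a dummy value when name is skipped...
        getgib "" rahul

        # ...named options avoid that, and extend naturally to new filters
        getgib --alias rahul --limit 10 --fields name,owner
        getgib --id 42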


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed: the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right, and it's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break, which can be a long time after the change that triggered the breakage, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.

    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
    - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script is used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.

    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss: sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. (A small sketch of the paired command lines appears at the end of this article.)
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
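    To make the paired command lines promised above concrete, here is an illustrative sketch only: libfoo, foo.o and the mapfile name are invented, and the flags other than -z stub are simply the ones the article itself uses in its libc example.

        # Build the real object from its inputs and mapfile...
        ld -64 -G -hlibfoo.so.1 -ztext -zdefs -Bdirect -M mapfile foo.o -o libfoo.so.1

        # ...and its stub with the identical command line plus -z stub;
        # input objects are ignored, only the mapfile is read
        ld -64 -z stub -G -hlibfoo.so.1 -ztext -zdefs -Bdirect -M mapfile -o stubs/libfoo.so.1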


  • Winetricks fails to find program files directory

    - by EgyptLovesUbuntu
    I installed a fresh copy of Ubuntu 12 desktop, then installed WINE from the Ubuntu Software Center and WineTricks from the Ubuntu Software Center. When I type the following command in the terminal:

        sudo winetricks dotnet40

    I get this error message:

        wine cmd.exe /c echo '%ProgramFiles%' returned empty string

    If I try the command without sudo:

        winetricks dotnet40

    the output is as follows:

        Executing w_do_call dotnet40
        Executing load_dotnet40
        ------------------------------------------------------
        dotnet40 does not yet fully work or install on wine. Caveat emptor.
        ------------------------------------------------------
        Executing mkdir -p /home/vectoruser/.cache/winetricks/dotnet40
        mkdir: cannot create directory `/home/vectoruser/.cache/winetricks/dotnet40': Permission denied
        ------------------------------------------------------
        Note: command 'mkdir -p /home/vectoruser/.cache/winetricks/dotnet40' returned status 1. Aborting.
        ------------------------------------------------------

    My current user is vectoruser, which I use to log on to Ubuntu. The output of ls -ld /home/vectoruser /home/vectoruser/.cache /home/vectoruser/.cache/winetricks gives:

        drwxr-xr-x 32 vectoruser vectoruser 4096 Aug  2 19:26 /home/vectoruser
        drwx------ 19 vectoruser vectoruser 4096 Aug  2 19:25 /home/vectoruser/.cache
        drwxr-xr-x  2 root       root       4096 Aug  2 18:09 /home/vectoruser/.cache/winetricks
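    The ls output points at the likely culprit: /home/vectoruser/.cache/winetricks was created root-owned by the earlier sudo run, so the unprivileged run can't write to it. A hedged sketch of the usual remedy; winetricks is generally meant to run as your own user:

        # Reclaim the cache directory created by the sudo run
        sudo chown -R vectoruser:vectoruser /home/vectoruser/.cache/winetricks

        # Then run winetricks unprivileged
        winetricks dotnet40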


  • Can't log in to MySQL

    - by Reza
    Even after I reset the root password with the following command, I cannot log in to MySQL (the other commands are listed to provide additional info):

        # sudo dpkg-reconfigure mysql-server-5.1
        # mysql -u root -p
        Enter password:
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
        # telnet 127.0.0.1 3306
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused
        # ps -Aw | grep mysql
        26522 ?        00:00:00 mysqld
        # /etc/init.d/mysql start
        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service mysql start
        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start mysql

    Update:

        # sudo mysqladmin -u root password 123
        mysqladmin: connect to server at 'localhost' failed

    It seems MySQL is not running properly.
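    For context, a refused TCP connection alongside a live mysqld process suggests the server is up but either not listening on 3306 (skip-networking/bind-address) or stuck mid-recovery, so checking its error log before retrying the password reset seems like the natural next step. Paths below are the Debian/Ubuntu defaults:

        # See why mysqld isn't accepting connections
        sudo tail -n 50 /var/log/mysql/error.log

        # Restart under Upstart and try the local socket explicitly
        sudo service mysql restart
        mysql -u root -p --socket=/var/run/mysqld/mysqld.sock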


  • How does landscape calculate memory usage?

    - by David Planella
    I'm trying to debug an OOM situation on an Ubuntu 12.04 server, and looking at the memory graphs in Landscape, I noticed that there wasn't any serious memory usage spike. Then I looked at the output of the free command, and I wasn't quite sure how the two memory usage figures relate to each other. Here's landscape's output on the server:

        $ landscape-sysinfo
          System load:    0.0               Processes:           93
          Usage of /:     5.6% of 19.48GB   Users logged in:     1
          Memory usage:   26%               IP address for eth0: -
          Swap usage:     2%

    Then I ran the free command and I get:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           486        381        105          0          4        165
        -/+ buffers/cache:        212        274
        Swap:          255          7        248

    I can understand the 2% swap usage, but where does the 26% memory usage come from?
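    A plausible reading, with the caveat that the formula is assumed from landscape-sysinfo's behavior rather than quoted from its source: it reports physical memory used excluding buffers and cache, i.e. the "-/+ buffers/cache" view, whereas 381/486 (roughly 78%) counts cache as used. Something like this reproduces that style of figure from /proc/meminfo:

        # Approximate landscape-sysinfo's memory figure (formula assumed)
        awk '/^MemTotal/{t=$2} /^MemFree/{f=$2} /^Buffers/{b=$2} /^Cached/{c=$2}
             END{printf "Memory usage: %.0f%%\n", 100*(t-f-b-c)/t}' /proc/meminfo

    Note that with the free snapshot above this works out to about 44%, not 26%, so the two readings were likely taken at different moments.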


  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk and, if possible, recover its (somewhat unimportant) contents. I have a Western Digital 1TB hard drive which had an NTFS partition. I unplugged the drive without safely removing it first. At first a pop-up asked me to use a Windows OS to run the chkdsk /f command; however, in an effort to keep using Linux, I ran the ntfsfix command in the Ubuntu terminal instead.

    Now when I try to access the hard drive, it doesn't show up anymore in Nautilus. I tried reformatting it using Disk Utility, but it gives me an error message, and GParted hangs on the "Scanning devices" step indefinitely. Please comment with any output that you would like to see and I will add it to my question.

    EDIT: Disk Utility tells me it is on /dev/sdb. The command sudo fdisk -l gives:

        dodohjk@DodosPC:~$ sudo fdisk -l
        [sudo] password for dodohjk:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006fa8c

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4094   482344959   241170433    5  Extended
        /dev/sda2       482344960   488396799     3025920   82  Linux swap / Solaris
        /dev/sda5            4096    31461127    15728516   83  Linux
        /dev/sda6        31463424    52434943    10485760   83  Linux
        /dev/sda7        52436992    62923320     5243164+  83  Linux
        /dev/sda8        62924800   482344959   209710080   83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6e697373

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?  1936269394  3772285809   918008208   4f  QNX4.x 3rd part
        /dev/sdb2   ?  1917848077  2462285169   272218546+  73  Unknown
        /dev/sdb3   ?  1818575915  2362751050   272087568   2b  Unknown
        /dev/sdb4   ?  2844524554  2844579527       27487   61  SpeedStor

        Partition table entries are not in disk order

    I wrote something wrong earlier; anyway, here is the output of sudo fsck /dev/sdb:

        dodohjk@DodosPC:~$ sudo fsck /dev/sdb
        fsck from util-linux 2.20.1
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>
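    Two observations offered as suggestions rather than a verified fix: the drive was NTFS, so fsck.ext2 complaining about ext2 superblocks is expected and not itself evidence of damage; and the nonsense partition entries point at a corrupted partition table, which testdisk is designed to reconstruct. The package name below is Ubuntu's:

        # Install and run testdisk to scan for the lost NTFS partition
        sudo apt-get install testdisk
        sudo testdisk /dev/sdb

        # Or, if only the files matter, photorec (bundled with testdisk) can
        # carve them off the raw device before reformatting
        sudo photorec /dev/sdb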


  • Using commands with ApplicationBarMenuItem and ApplicationBarButton in Windows Phone 7

    - by Laurent Bugnion
    Unfortunately, in the current version of the Windows Phone 7 Silverlight framework, it is not possible to attach a command to the ApplicationBarMenuItem and ApplicationBarButton controls. These two controls appear in the Application Bar, for example with the following markup:

        <phoneNavigation:PhoneApplicationPage.ApplicationBar>
            <shell:ApplicationBar x:Name="MainPageApplicationBar">
                <shell:ApplicationBar.MenuItems>
                    <shell:ApplicationBarMenuItem Text="Add City" />
                    <shell:ApplicationBarMenuItem Text="Add Country" />
                </shell:ApplicationBar.MenuItems>
                <shell:ApplicationBar.Buttons>
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.video.rest.png" />
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.settings.rest.png" />
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.refresh.rest.png" />
                </shell:ApplicationBar.Buttons>
            </shell:ApplicationBar>
        </phoneNavigation:PhoneApplicationPage.ApplicationBar>

    This code creates the application bar UI, shown collapsed and expanded. ApplicationBar items are not, however, controls. A quick look in MSDN shows the class hierarchy for ApplicationBarMenuItem, for example. Unfortunately, this prevents all the mechanisms that are normally used to attach a Command (for example a RelayCommand) to a control. For example, the attached behavior present in the class ButtonBaseExtension (from the Silverlight 3 version of the MVVM Light toolkit) can only be attached to a DependencyObject. Similarly, Blend behaviors (such as EventToCommand from the toolkit's Extras library) need a FrameworkElement to work.

    Using code behind

    The alternative is to use code behind. As I said in my MIX10 talk, the MVVM police will not take your family away if you use code behind (this quote was actually suggested to me by Glenn Block); the code behind is there for a reason. In our case, invoking a command in the ViewModel requires the following code.

    In MainPage.xaml:

        <shell:ApplicationBarMenuItem Text="My Menu 1"
                                      Click="ApplicationBarMenuItemClick"/>

    In MainPage.xaml.cs:

        private void ApplicationBarMenuItemClick(object sender, System.EventArgs e)
        {
            // Forward the click event to the ViewModel's command
            var vm = DataContext as MainViewModel;
            if (vm != null)
            {
                vm.MyCommand.Execute(null);
            }
        }

    Conclusion

    Resorting to code behind to bridge the gap between the View and the ViewModel is less elegant than using attached behaviors, either through an attached property or through a Blend behavior. It does, however, work fine. I don't have any information on whether future changes in the Windows Phone 7 Application Bar API will make this easier. In the meantime, I would recommend using code behind.

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn


  • How to display a reboot-required user notification after installing a custom package in Linux?

    - by user284588
    After installing a custom package, I need to force a reboot of the system. I looked at a couple of solutions: using notify-send to display a user notification followed by a reboot command did work as planned, but the notification is only shown when I install the package from the command line, not when I install it through Software Center. I came across some posts that suggested adding the following to the postinst script:

        [ -x /usr/share/update-notifier/notify-reboot-required ] && \
            /usr/share/update-notifier/notify-reboot-required || true

    I tried including the above in the postinst script, but all it does is update the two files /var/run/reboot-required.pkgs and /var/run/reboot-required with restart information. It neither displayed a user notification nor rebooted the system after the package was installed. Is there a way to display a reboot-required user notification in Ubuntu/Fedora/openSUSE?
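    A note on why the snippet behaves that way, plus a heavily hedged sketch: notify-reboot-required only records the pending reboot in /var/run/reboot-required (the update-notifier applet in the logged-in user's session is what eventually surfaces it), and a postinst runs as root with no DISPLAY or session bus, which is why notify-send works from a terminal but not from Software Center. Addressing the active user's bus from a maintainer script is possible but fragile; everything below (user detection, the bus path, which releases it works on) is an assumption to adapt, not a portable recipe:

        #!/bin/sh
        # Find the first logged-in user and their session bus (assumptions!)
        user=$(who | awk 'NR==1{print $1}')
        uid=$(id -u "$user")
        sudo -u "$user" DISPLAY=:0 \
            DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$uid/bus" \
            notify-send "Reboot required" "Please restart to finish installation."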


  • Cleaning up a folder structure from Visual Studio artifacts from the shell

    A few years ago I wrote a post that showed how to write a NAnt script to clean a folder structure of the artifact folders used by Visual Studio. Today what I wanted to show you is a way that doesn't require NAnt installed on your computer, but uses just a very simple command in the Windows shell. Actually, it's just a very tiny variation of the same command that Jon Galloway wrote to clean a folder structure of SVN files. But without further ado, here it is: Windows Registry Editor...


  • The illusion of Competence

    - by tony_lombardo
    Working as a contractor opened my eyes to the developer food chain.  Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through.  I thought I’d share a couple of experiences with you, and the lessons that can be taken from them. Lesson 1: Beware of the “funnel” guy.  The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him.  He may say it’s because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions.  Here’s an example.  When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him.  There were 4 of us coding on the project, but only 2 of us working on the UI.  The other 2 were working on a separate application, but part of the overall project.  So I figured, I’ll check it into SVN, he reviews and accepts then merges in.  Not even close.  I didn’t even have checkin rights to SVN, I had to email my changes to the developer so he could check those changes in.  Lesson 2: If you point out flaws in code to someone supposedly ‘higher’ than you in the developer chain, they’re going to get defensive.  My first task on this project was to review the code, familiarize myself with it.  So of course, that’s what I did.  And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix it.  Of course, when I reviewed these changes with the developer (guy who originally wrote the code), he smiled and nodded and said, we can’t make those changes now, it’s too destabilizing.  I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN. How about some concrete examples? I started out by recommending we remove NUnit dependency and tests from the application project, and create a separate Unit testing project.  This was met with a little bit of resistance because - “How do I access the private methods?”  As it turned out there weren’t really any private methods that weren’t exposed by public methods, so I quickly calmed this fear. Win 1 Loss 0 Next, I recommended that all of the File IO access be wrapped in Using clauses, or at least properly wrapped in try catch finally.  This recommendation was accepted.. but never implemented. Win 2  Loss 1 Next recommendation was to refactor the command pattern implementation.  The command pattern was implemented, but it wasn’t really necessary for the application.  More over, the fact that we had 100 different command classes, each with it’s own specific command parameters class, made maintenance a huge hassle.  The same code repeated over and over and over.  This recommendation was declined, the code was too fragile and this change would destabilize it.  I couldn’t disagree, though it was the commands themselves in many cases that were fragile. Win 2 Loss 2 Next recommendation was to aid performance (and responsiveness) of the application by using asynchronous service calls.  This on was accepted. Win 2 Loss 3 If you’re paying any attention, you’re wondering why the async service calls was scored as a loss.. Let me explain.  The service call was made using the async pattern.  Followed by a thread.sleep  <facepalm>. Now it’s easy to be harsh on this kind of code, especially if you’re an experienced developer.  But I understood how most of this happened.  
    One junior guy, working as hard as he can to build his first real-world application, with little or no guidance from anyone else. He had his pattern book and the theory of programming to help him, but no real-world experience. He didn’t know how difficult it would be to trace the crashes back to the coding issues above, but he will one day. The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”. I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure. And that’s lesson 3: just because you’ve got a hard worker doesn’t mean he should be leading a development project.

    If you’re a junior guy busting your ass, keep at it. I encourage you to try new things, but most importantly to learn from your mistakes. And correct your mistakes. And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally; they’re really trying to help you. And if you’re a senior guy working with a junior guy, it’s your duty to point out the flaws in the code, even if it does make you the bad guy. And while I’ve used “guy” above, I mean both men and women. And in some cases mutant dinosaurs.
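    For what it’s worth, here is a minimal sketch of the file IO change recommended above; the file-reading routine is hypothetical, not code from the actual project:

        using System;
        using System.IO;

        static class FileIoSketch
        {
            // The recommendation: 'using' guarantees the reader is disposed
            // (and the file handle released) even if ReadToEnd throws,
            // replacing bare open/read/close sequences.
            static string ReadAllOf(string path)
            {
                using (var reader = new StreamReader(path))
                {
                    return reader.ReadToEnd();
                }
            }

            static void Main()
            {
                Console.WriteLine(ReadAllOf("example.txt"));
            }
        }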

    Read the article

  • Problem: libc.so.6 version GLIBC_2.14 not found

    - by alessio
    I have installed Nagios Core 4 on Ubuntu Server 12.04 LTS and everything is working fine, but I have a problem running remote commands against a remote Linux (Ubuntu Server 12.04) PC. When I try to check a service, for example check_swap or check_disk, I get this error every time:

        Remote command execution failed: /home/nagios/plugins/check_disk:
        /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found
        (required by /home/nagios/plugins/check_disk)

    The remote PC is not my PC and I don't want to make a disaster! :) So... how can I fix this problem? Any help is appreciated! Thanks in advance! :)
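    The error means the check_disk binary was built against a newer glibc (2.14 or later) than the one installed on the remote host. Assuming shell access to the remote machine, one way to confirm the mismatch (a sketch, not from the original question):

        # glibc version actually installed on the remote host
        ldd --version | head -n 1

        # glibc versions the plugin binary requires
        objdump -T /home/nagios/plugins/check_disk | grep GLIBC | sort -u

    If they disagree, the usual non-invasive fix is to rebuild the plugins on the remote host itself (or on a machine with the same glibc version) rather than copying binaries built elsewhere.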

    Read the article

  • Problem opening SFX archive file(.exe) using the archive manager

    - by Cody
    I have installed both rar and unrar via apt-get, but I am still not able to use Archive Manager to open the archive file. I have also tried installing p7zip (p7zip-full and p7zip), but no improvement. However, when I use the command line to extract the files from the archive with unrar or rar, the command executes successfully. Is there any other open-source software I should install for viewing the contents of the SFX archive, or what else should I install to view it in Archive Manager? Thanks in advance.
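    Since the command-line tools already work, here is the kind of invocation in question, for reference (the archive name is a placeholder):

        # list the contents of the self-extracting archive
        unrar l installer.exe

        # extract it into the current directory, keeping paths
        unrar x installer.exe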

    Read the article

  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
    - Click on me. Choose me. - asked one forgotten feature, while a bored DBA was purposelessly wandering through the Management Studio menus at the end of her long and busy working day.
    - Why would I use you? I have heard of no one who does. What are you for? - wondered the aged and wise DBA, perplexed. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't.
    - I know you. You are quite lazy. Why would you do additional clicks to move from window to window, from tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window!
    - I have all the tools I'm used to: Management Studio, Cmd, PowerShell. They can do anything for me. I don't need additional tools.
    - I promise you, you will like me, - the thing continued to whine.
    - All right, show me, - she gave up. It's always this way, she thought sadly: easier to agree than to explain why you don't want to.
    - Enable me, and then think about anything that you always couldn't do through Management Studio and had to use other tools for.
    - OK. Google for me the list of greatest features of SQL Server 2012.
    - Well... I'm not sure... Think about something else.
    - OK, here is something easy for you. I want to check if a file folder exists, or if a file is there. Though I can easily do this using xp_cmdshell...
    - This is easy for me, - rejoiced the feature. By the way, having items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment.
    - After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner), - you can use all command-line commands in the same Management Studio query window by adding two exclamation marks (!!) at the beginning of the script line to denote a cmd command. Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer, not on the SQL Server that the query window is connected to. This is the main difference from xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use UNC paths instead of local paths.
    - Look, there is much more than that, - the SQLCMD feature was getting excited. - You can get the IP of your servers; create, rename and drop folders. You can see the contents of any file anywhere, and even start different tools from the same query window.
    The not-so-aged-and-wise DBA was getting interested:
    - I also want to run different scripts on different servers without changing the connection of the query window.
    - Sure, sure! Another great feature that SQLCMD mode provides, giving more power to querying. Use ":" for the additional commands, like :connect, which allows you to change the connection. Now imagine you have one script with all your changes: creating a staging table on the DWH staging server, adding a fact table to the DWH itself, and updating stored procedures on the server where the reporting database is located. Now, give me more challenges!
    - Script out a list of stored procedures into text files.
    - You can do that easily by using the :out command, which writes the query results into a specified text file. The output can be the code of a stored procedure or any data. Actually, this is the same as changing the query output to go to a file instead of the grid.
    - Now take all of those scripts and run them, one by one, on a different server.
    - Easily.
    - Come on... I'm sure that you can not...
    - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn't work, although I have no idea why.
    - Wow. - She was really impressed. - OK, I'll go try all those...
    - Wait, wait! I know how to google the SQL Server features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC).
    "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten," thought the DBA while walking through the dark, empty parking lot to her lonely car. "As someone really wise once said: 'It is what we think we know that keeps us from learning. Learn, unlearn and relearn.'"
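    Pulling the dialogue's examples together, here is a minimal sketch of an SQLCMD-mode script (enable it via Query > SQLCMD Mode in Management Studio; the server name and UNC paths below are placeholders, not from the original post):

        :setvar StagingServer "DWHSTAGE01"
        :setvar ScriptDir "\\fileserver\dwh\scripts"

        -- !! runs a cmd command on YOUR machine, hence the UNC path
        !!dir $(ScriptDir)

        -- change the query window's connection without touching the UI
        :connect $(StagingServer)
        SELECT @@SERVERNAME;
        GO

        -- send query output to a text file instead of the grid
        :out $(ScriptDir)\proc_definitions.txt
        SELECT OBJECT_DEFINITION(object_id) FROM sys.procedures;
        GO

        -- open and execute other scripts; leave an empty line between :r calls
        :r $(ScriptDir)\create_staging_table.sql

        :r $(ScriptDir)\update_report_procs.sql

        -- and the google trick from the end of the story (assumes chrome is on the PATH)
        !!start chrome "http://www.google.com/search?q=SQL+Server+2012+top+features"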

    Read the article

  • How should I design my database API commands? [closed]

    - by WebDev
    I am developing a database API for a project, with commands for getting data from the database. For example, I have one gib table, so the command for it is: getgib name alias limit fields. If the user passes a name (getgib rahul), then it returns all gib data whose name is like rahul. If an alias is given, then it returns all the gibs owned by the user whose alias (user id) was given. I want the remaining arguments to work like this: limit restricts the number of records returned by the query, and fields lists extra fields to add to the select. So now the commands are set, but: I also want to fetch gibs by gibid, so how should I add that? Any suggestion to improve my command design is welcome. Also, if the user doesn't want to specify the name and only wants the gibs for a given alias, what separator should I use in place of the name?
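    One common way out of both problems (the gibid lookup and the missing-name separator) is to make every argument a named key=value pair, so position no longer matters; a sketch, where the id parameter is my addition for the gibid case:

        getgib name=rahul limit=10 fields=created,title
        getgib alias=rahul123
        getgib id=42

    The parser splits the command on whitespace and each token on the first "=", so any argument can be omitted without needing a placeholder or separator.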

    Read the article

  • How to escape this in the bash script?

    - by allenskd
    I'm trying to queue up a batch of 3 videos and leave it processing until morning, but it seems there are special characters involved. If I run the command "raw" in the terminal it works, but in a bash script it stops working. Example:

        args1="-r 29.97 -t 00:13:30 -vsync 0 -vpre libx264-medium -i"
        args12="-r 29.97 -ss 00:40:30 -vsync 0 -vpre libx264-medium -i"
        args2="[in] scale=580:380 [T1],[T1] pad=720:530:0:50 ..."

    (args2 continues with other arguments containing lots of [ and ].) In the output it says: Unable to find a suitable output format for 'scale=580:380'. Not sure why... like I said, the command runs fine on the command line, just not in the script:

        /usr/local/bin/ffmpeg "$args1" "${file}" -vf "$args2" "$args3" "${args[0]}_${startingfrom}_0001_02.mp4"
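    The likely culprit is the quoting itself: "$args1" hands ffmpeg the entire option string as one argument, while an unquoted $args2 gets word-split at the space, so 'scale=580:380' arrives as a stray argument that ffmpeg treats as an output file. A minimal sketch of the usual fix, using bash arrays for the option lists (file names are placeholders):

        #!/bin/bash
        # one array element per option: no word-splitting, no globbing of [ and ]
        args1=(-r 29.97 -t 00:13:30 -vsync 0 -vpre libx264-medium)
        filter='[in] scale=580:380 [T1],[T1] pad=720:530:0:50'

        # "${args1[@]}" expands to one word per element; the filter stays one word
        /usr/local/bin/ffmpeg "${args1[@]}" -i "$file" -vf "$filter" "output_0001_02.mp4"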

    Read the article

  • Munin email notification

    - by Prashanth
    I am trying to get Munin to notify me via email. I have configured Munin and it reports critical and warning values, but no alerts are being sent, and no script is being called either. Can you please help me out with this? I have included the relevant part of munin.conf below:

        # Drop [email protected] and [email protected] an email everytime
        # something changes (OK -> WARNING, CRITICAL -> OK, etc)
        #contact.someuser.command mail -s "Munin notification" [email protected]\
        contact.prashanth.command echo "Munin notification" | sendmail -t [email protected]
        contact.prashanth.always_send warning critical
        contact.root.command echo "Munin notification" | sendmail -t [email protected]
        contact.root.always_send warning critical
        contact.pipevia.command | /home/prashanth/script.sh /home/prashanth/script.sh

    None of this works. What am I missing here, and why are no emails being sent? Thanks in advance. Here is munin-limits.log:

        2011/09/26 14:58:12 Opened log file
        2011/09/26 14:58:12 [INFO] Starting munin-limits, getting lock /var/run/munin/munin-limits.lock
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $a[0] in pattern match (m//) at /usr/share/perl5/Munin/Master/LimitsOld.pm line 722.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $a[0] in pattern match (m//) at /usr/share/perl5/Munin/Master/LimitsOld.pm line 725.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $a[0] in pattern match (m//) at /usr/share/perl5/Munin/Master/LimitsOld.pm line 740.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $a[0] in pattern match (m//) at /usr/share/perl5/Munin/Master/LimitsOld.pm line 754.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $a[0] in pattern match (m//) at /usr/share/perl5/Munin/Master/LimitsOld.pm line 759.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $text in length at /usr/share/perl5/Munin/Master/LimitsOld.pm line 774.
        2011/09/26 14:58:12 [PERL WARNING] Use of uninitialized value $res[3] in join or string at /usr/share/perl5/Munin/Master/LimitsOld.pm line 777.
        [... the same block of warnings repeats three more times, with $res[15] and $res[1] ...]
        2011/09/26 14:58:12 Baz?
        2011/09/26 14:58:12 [INFO] munin-limits finished (0.02 sec)
        2011/09/26 14:58:12 Command "prashanth" stderr: Munin notification - this is a test mail from the user prashanth | sendmail -t [email protected]
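    For what it's worth, munin-limits pipes the notification text into the contact command's standard input, so the command should be a single program that reads stdin, not a shell pipeline; the Command "prashanth" stderr line at the end of the log suggests the pipe and address were handed to echo as literal arguments. A sketch of the documented style of contact definition (the address is a placeholder):

        # munin.conf -- munin-limits pipes the message body into this command
        contact.prashanth.command mail -s "Munin notification for ${var:group} :: ${var:host}" [email protected]
        contact.prashanth.always_send warning critical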

    Read the article

  • How does landscape calculate free memory?

    - by David Planella
    I'm trying to debug an OOM situation on an Ubuntu 12.04 server, and looking at the memory graphs in Landscape, I noticed that there wasn't any serious memory usage spike. Then I looked at the output of the free command, and I wasn't quite sure how the two memory usage figures relate to each other. Here's landscape-sysinfo's output on the server:

        $ landscape-sysinfo
          System load:    0.0               Processes:           93
          Usage of /:     5.6% of 19.48GB   Users logged in:     1
          Memory usage:   26%               IP address for eth0: -
          Swap usage:     2%

    Then I run the free command and I get:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           486        381        105          0          4        165
        -/+ buffers/cache:        212        274
        Swap:          255          7        248

    I can understand the 2% swap usage, but where does the 26% memory usage come from?
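    For comparison, the application-level usage that free reports (used minus buffers and cache) can be computed directly from the same columns; against the output above this gives roughly 44%, not 26%, so whatever formula landscape-sysinfo uses, or the moment it samples, evidently differs. The awk below is only a sketch against the old free layout shown:

        # percentage of total RAM in use once buffers and cache are excluded
        free -m | awk '/^Mem:/ { printf "%.0f%%\n", 100 * ($3 - $6 - $7) / $2 }'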

    Read the article

  • Linux Mint: How do I autorun rdp script?

    - by Rommel
    Hi to all, my name is Rommel... I'm new to Linux because I'm more into Windows, but now I want to try a Linux OS. I have Linux Mint installed on one of my desktop PCs, and I have downloaded and installed freerdp-x11 so I can connect to my Windows terminal server. The thing I really need help with is this: I want the connection to the Windows terminal server to happen automatically, so that every time I boot my Linux Mint PC I don't have to keep typing "xfreerdp 000.000.0.000" on the terminal command line. Is there a script for it? Please, guys, I really need your help on this. You can email me at [email protected] or [email protected]. Thanks in advance.
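    Linux Mint's desktop environments honor XDG autostart entries, so one approach is to drop a .desktop file into ~/.config/autostart that runs the connection command at login; a sketch, keeping the placeholder IP from the question (append whatever xfreerdp options you normally use to the Exec line):

        mkdir -p ~/.config/autostart
        cat > ~/.config/autostart/rdp-session.desktop <<'EOF'
        [Desktop Entry]
        Type=Application
        Name=RDP session
        Comment=Connect to the Windows terminal server at login
        Exec=xfreerdp 000.000.0.000
        EOF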

    Read the article
