Search Results

Search found 35444 results on 1418 pages for 'lock down computer'.


  • How to imitate focus on a div element?

    - by Alexis Abril
    The scenario involves imitating a drop-down list. Instead of the standard select input element, we're using a set of custom images in combination with a few CSS styles and jQuery behaviors. All in all, we have a drop-down list that works just fine. The question revolves around a drop-down list losing focus. A normal select element losing focus would automatically close the list; however, since we are using a set of divs, it's not possible to gain or lose focus (and subsequently hide the list). We've been able to imitate normal drop-down behavior by hiding a text input behind the div and setting the focus to it dynamically. For example: the user clicks the drop-down list, the content list is displayed, and the hidden text input is given focus. An event is bound to the text input losing focus, which hides the content list. I'm curious whether the community has a better approach to hiding a div when a user clicks off, presses Escape, etc.
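
    One alternative approach (my suggestion, not from the question) is to make the div itself focusable by giving it a tabindex attribute; blur and keyup handlers can then be bound directly to it, with no hidden input. A minimal jQuery sketch, assuming a hypothetical #dropdown container with a .list child:

        // Assumed markup: <div id="dropdown" tabindex="0"><div class="list">...</div></div>
        $('#dropdown').click(function () {
            $(this).find('.list').show();
            $(this).focus();                  // focusable thanks to tabindex="0"
        });
        $('#dropdown').blur(function () {
            $(this).find('.list').hide();     // clicking off closes the list
        });
        $('#dropdown').keyup(function (e) {
            if (e.which === 27) {             // Escape
                $(this).find('.list').hide();
            }
        });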

    Read the article

  • What is your best programmer joke?

    - by hmason
    When I teach introductory computer science courses, I like to lighten the mood with some humor. Having a sense of fun about the material makes it less frustrating and more memorable, and it's even motivating if the joke requires some technical understanding to 'get it'! I'll start off with a couple of my favorites: Q: How do you tell an introverted computer scientist from an extroverted computer scientist? A: An extroverted computer scientist looks at your shoes when he talks to you. And the classic: Q: Why do programmers always mix up Halloween and Christmas? A: Because Oct 31 == Dec 25! I'm always looking for more of these, and I can't think of a better group of people to ask. What are your best programmer/computer science/programming jokes?
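
    For readers who want to verify the punchline: the digits "31" interpreted in octal are 3*8 + 1 = 25 in decimal. A one-liner in Python:

        print(0o31 == 25)  # True: octal 31 equals decimal 25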

    Read the article

  • Last-in, first-out UDP structure in MATLAB

    - by D Zondervan
    I am using MATLAB's UDP function from the Instrument Control Toolbox to send data packets from one computer to another. The first computer is constantly updating data values and sending them to the other computer, and I want that computer to be able to query the first one for the most recent values whenever it needs them. However, the default implementation of UDP send and receive in MATLAB is a FIFO structure: the first packet I send is the first the other computer receives when it executes the "fscanf" function. I want the last packet I sent to be the one that fscanf returns. Is this possible, or do I need to use a different protocol?
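
    As far as I know the toolbox's UDP object has no built-in LIFO mode, so the usual workaround is to drain the receive queue on each query and keep only the newest datagram. A sketch of the idea in Python (the MATLAB equivalent would loop while the object reports bytes available, which is an assumption about the toolbox API on my part):

        import socket

        # Sketch: read everything queued and keep only the last datagram.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9000))   # hypothetical port
        sock.setblocking(False)

        def latest_packet():
            data = None
            while True:
                try:
                    data, _addr = sock.recvfrom(65535)  # keep overwriting
                except BlockingIOError:
                    return data  # newest datagram, or None if the queue was empty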

    Read the article

  • Tab1.java from the API Demos throws an exception

    - by Kooper
    I don't know why, but all my Tab programs throw an exception - even the ones from the API Demos. Here is the code:

        package com.example.android.apis.view;

        import android.app.TabActivity;
        import android.os.Bundle;
        import android.widget.TabHost;
        import android.widget.TabHost.TabSpec;
        import android.view.LayoutInflater;
        import android.view.View;

        public class Tab1 extends TabActivity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                TabHost tabHost = getTabHost();
                LayoutInflater.from(this).inflate(R.layout.main, tabHost.getTabContentView(), true);
                tabHost.addTab(tabHost.newTabSpec("tab1")
                        .setIndicator("tab1")
                        .setContent(R.id.view1));
                tabHost.addTab(tabHost.newTabSpec("tab2")
                        .setIndicator("tab2")
                        .setContent(R.id.view2));
                tabHost.addTab(tabHost.newTabSpec("tab3")
                        .setIndicator("tab3")
                        .setContent(R.id.view3));
            }
        }

    Here is the log:

        06-13 17:24:38.336: WARN/jdwp(262): Debugger is telling the VM to exit with code=1
        06-13 17:24:38.336: INFO/dalvikvm(262): GC lifetime allocation: 2511 bytes
        06-13 17:24:38.416: DEBUG/Zygote(30): Process 262 exited cleanly (1)
        06-13 17:24:38.456: INFO/ActivityManager(54): Process com.example.android.apis.view (pid 262) has died.
        06-13 17:24:38.696: INFO/UsageStats(54): Unexpected resume of com.android.launcher while already resumed in com.example.android.apis.view
        06-13 17:24:38.736: WARN/InputManagerService(54): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@44dc4b38
        06-13 17:24:48.337: DEBUG/AndroidRuntime(269): AndroidRuntime START <<<<<<<<<<<<<<
        06-13 17:24:48.346: DEBUG/AndroidRuntime(269): CheckJNI is ON
        06-13 17:24:48.856: DEBUG/AndroidRuntime(269): --- registering native functions ---
        06-13 17:24:49.596: DEBUG/ddm-heap(269): Got feature list request
        06-13 17:24:50.576: DEBUG/AndroidRuntime(269): Shutting down VM
        06-13 17:24:50.576: DEBUG/dalvikvm(269): DestroyJavaVM waiting for non-daemon threads to exit
        06-13 17:24:50.576: DEBUG/dalvikvm(269): DestroyJavaVM shutting VM down
        06-13 17:24:50.576: DEBUG/dalvikvm(269): HeapWorker thread shutting down
        06-13 17:24:50.586: DEBUG/dalvikvm(269): HeapWorker thread has shut down
        06-13 17:24:50.586: DEBUG/jdwp(269): JDWP shutting down net...
        06-13 17:24:50.586: INFO/dalvikvm(269): Debugger has detached; object registry had 1 entries
        06-13 17:24:50.596: ERROR/AndroidRuntime(269): ERROR: thread attach failed
        06-13 17:24:50.606: DEBUG/dalvikvm(269): VM cleaning up
        06-13 17:24:50.676: DEBUG/dalvikvm(269): LinearAlloc 0x0 used 628628 of 5242880 (11%)
        06-13 17:24:51.476: DEBUG/AndroidRuntime(278): AndroidRuntime START <<<<<<<<<<<<<<
        06-13 17:24:51.486: DEBUG/AndroidRuntime(278): CheckJNI is ON
        06-13 17:24:51.986: DEBUG/AndroidRuntime(278): --- registering native functions ---
        06-13 17:24:52.746: DEBUG/ddm-heap(278): Got feature list request
        06-13 17:24:53.716: DEBUG/ActivityManager(54): Uninstalling process com.example.android.apis.view
        06-13 17:24:53.726: INFO/ActivityManager(54): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.android.apis.view/.Tab1 }
        06-13 17:24:53.876: DEBUG/AndroidRuntime(278): Shutting down VM
        06-13 17:24:53.886: DEBUG/dalvikvm(278): DestroyJavaVM waiting for non-daemon threads to exit
        06-13 17:24:53.916: DEBUG/dalvikvm(278): DestroyJavaVM shutting VM down
        06-13 17:24:53.926: DEBUG/dalvikvm(278): HeapWorker thread shutting down
        06-13 17:24:53.936: DEBUG/dalvikvm(278): HeapWorker thread has shut down
        06-13 17:24:53.936: DEBUG/jdwp(278): JDWP shutting down net...
        06-13 17:24:53.936: INFO/dalvikvm(278): Debugger has detached; object registry had 1 entries
        06-13 17:24:53.957: DEBUG/dalvikvm(278): VM cleaning up
        06-13 17:24:54.026: ERROR/AndroidRuntime(278): ERROR: thread attach failed
        06-13 17:24:54.146: DEBUG/dalvikvm(278): LinearAlloc 0x0 used 638596 of 5242880 (12%)
        06-13 17:24:54.286: INFO/ActivityManager(54): Start proc com.example.android.apis.view for activity com.example.android.apis.view/.Tab1: pid=285 uid=10054 gids={1015}
        06-13 17:24:54.676: DEBUG/ddm-heap(285): Got feature list request
        06-13 17:24:55.006: WARN/ActivityThread(285): Application com.example.android.apis.view is waiting for the debugger on port 8100...
        06-13 17:24:55.126: INFO/System.out(285): Sending WAIT chunk
        06-13 17:24:55.186: INFO/dalvikvm(285): Debugger is active
        06-13 17:24:55.378: INFO/System.out(285): Debugger has connected
        06-13 17:24:55.386: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:55.586: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:55.796: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:55.996: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:56.196: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:56.406: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:56.606: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:56.806: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:57.016: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:57.216: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:57.416: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:57.626: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:57.836: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:58.039: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:58.246: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:58.451: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:58.656: INFO/System.out(285): waiting for debugger to settle...
        06-13 17:24:58.866: INFO/System.out(285): debugger has settled (1367)
        06-13 17:24:59.126: ERROR/gralloc(54): [unregister] handle 0x129980 still locked (state=40000001)
        06-13 17:25:03.816: WARN/ActivityManager(54): Launch timeout has expired, giving up wake lock!
        06-13 17:25:04.906: WARN/ActivityManager(54): Activity idle timeout for HistoryRecord{44d60e10 com.example.android.apis.view/.Tab1}

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    On mid-day Wednesday, the always colorful Oracle Evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, “Do You Like Coffee with Your Dessert?”. The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there is a single feature that makes the Raspberry Pi significant,” observed Ritter, “but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing.”

    Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it's an excellent introduction to the ARM architecture; it runs well with Java and will get better once it has official hard-float support. The possibilities are vast.

    Ritter explained that the Raspberry Pi Project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981 that produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day. Ritter described the specification as follows:
    * CPU: ARM11 core running at 700MHz in a Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
    * Memory: 256Mb
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

    He took attendees through a brief history of the ARM architecture:
    * Acorn BBC Micro (6502-based): not powerful enough for Acorn's plans for a business computer
    * Berkeley RISC Project: the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was... SPARC
    * Acorn RISC Machine (ARM): 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Spin-off from Acorn: Advanced RISC Machines

    Next he presented its features:
    * 32-bit RISC architecture: ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips were sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs: the Raspberry Pi is ARM11, using the ARMv6 instruction set
    * Low power consumption: good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and does not require a heatsink or fan

    He described the current ARM technology:
    * ARMv6: ARM 11, ARM Cortex-M
    * ARMv7: ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced): will support 64-bit data and addressing

    He next gave the Java specifics for ARM, starting with floating-point operations:
    * Despite being an ARMv6 processor, it does include an FPU (the FPU only became standard as of ARMv7)
    * The FPU (Hard Float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so special builds of both are needed; the Raspbian distro build is now available, and the Oracle JVM is in the works, release date TBD

    Not so RISC - performance improvements:
    * DSP enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating point (VFP)
    * NEON
    * Security enhancements (TrustZone)

    He spent a few minutes going over the challenges of using Java on the Raspberry Pi, covering:
    * Sound
    * Vision
    * Serial (TTL UART)
    * USB
    * GPIO

    On implementing sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API: remember to add audio to the user's groups; some bits work, others not so much
    * Playing a WAV file (in the right format) works
    * Using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech should work once sound works properly

    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only; will be released as a technology preview soon
    * The work involves an optimal implementation of the Prism graphics engine (X11?)
    * Once the JavaFX implementation is completed there will be little of concern to developers: it's just Java (WORA)

    He explained the basics of the serial port:
    * The UART provides TTL-level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console

    He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM, and a platform that works very well with Java and will work better in the future. The opportunities are limitless. For further info, check out the Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.

    Read the article

  • T-SQL: Dynamic Where clause in normal SQL statement

    - by Torben H.
    Hey there, I'm looking for a way to dynamically add a filter to my statement without using dynamic SQL. I want to select all computers from a table, but when I pass a computer ID to the stored procedure, I want to get only that computer. Currently I am trying this:

        DECLARE @ComputerFilter AS INT
        DECLARE @ComputerID AS INT

        SELECT Computername
        FROM Computer
        WHERE (ComputerID = @ComputerID) OR (@ComputerFilter IS NULL)

    But this is 100 times slower than the statement below, and takes as long as SELECT * FROM Computer:

        SELECT Computername
        FROM Computer
        WHERE ComputerID = @ComputerID

    Is there a way to speed the first statement up, or is there any other way to solve this problem with one SELECT and without dynamic SQL?
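
    A frequently recommended fix for this catch-all pattern (a suggestion on my part, not from the question) is the RECOMPILE hint, which makes the optimizer build a plan for the actual parameter values on each execution instead of one generic plan that must also cover the NULL case. A sketch:

        -- Sketch: with RECOMPILE the optimizer sees @ComputerID's actual value,
        -- so the OR branch no longer forces a scan of the whole table.
        SELECT Computername
        FROM Computer
        WHERE (@ComputerID IS NULL OR ComputerID = @ComputerID)
        OPTION (RECOMPILE);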

    Read the article

  • About Network Address Translation (NAT)?

    - by Rudi
    Just curious about a particular scenario with NAT. Let's suppose we have 4 computers sharing a global IP address behind the NAT. I understand that the NAT box keeps an internal record so it knows which computer to forward responses to. But let's say on computer #2 I'm trying to download a file, and on computers #1, #3, and #4 I'm just browsing the web normally. When the browser initiates a TCP connection to get that file, how does the NAT know which computer to hand the response to? Each of the four computers is talking to port 80 to browse the web, right? How does the NAT's record distinguish which "port 80" connection belongs to which computer?
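
    To make the bookkeeping concrete, here is a toy model (purely illustrative; the addresses and ports are made up) of a NAT translation table. The shared "port 80" is only the destination; what distinguishes the computers is each connection's source IP and source port, which the NAT rewrites to a unique external port and records:

        # Toy NAT table: external port -> (internal IP, internal source port).
        nat_table = {
            50001: ("192.168.0.2", 3300),  # computer #2's file download
            50002: ("192.168.0.1", 3387),  # computer #1 browsing
            50003: ("192.168.0.3", 4411),  # computer #3 browsing
        }

        def route_inbound(external_port):
            # A reply arriving at global_ip:external_port is forwarded to
            # whichever internal host opened that mapping.
            return nat_table.get(external_port)

        print(route_inbound(50001))  # ('192.168.0.2', 3300)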

    Read the article

  • My real-time network receive time varies a lot - can anyone help?

    - by sguox002
    I wrote a program using TCP/IP sockets to send commands to a device and receive data from the device. The data size is around 200 KB to 600 KB. The computer is directly connected to the device over a 100 Mbit/s network. I found that the packets sent by the device always arrive at the computer at full 100 Mbit/s line speed (I have debugging information on the unit and I also verified this using some network-monitoring software), but the receive time varies a lot, from 40 ms to 250 ms, even when the size is the same (I have a receive buffer of about 700 KB and a receive window of 8092 bytes, and changing the window size does not change anything). The phenomenon also differs between computers, but on the same computer the problem is very stable. For example, receiving 300 KB takes 40 ms on computer A but may cost 200 ms on another computer. I have disabled the firewall, antivirus, and all network protocols except TCP/IP. Can any experts give me some hints?
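
    One classic cause of exactly this kind of 40 ms vs. 200 ms variation (a guess, not a diagnosis) is the interaction between Nagle's algorithm and delayed ACKs, which plays out differently depending on each machine's TCP stack settings. Disabling Nagle on the socket is a cheap experiment; a sketch in Python:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Disable Nagle's algorithm so small segments are sent immediately
        # instead of waiting for the peer's (possibly delayed) ACK.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.connect(("192.168.0.50", 5000))  # hypothetical device address/port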

    Read the article

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SP 2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I then decided to turn my attention to my employer's intranet environment, which would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well... kind of.

    Issue

    My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC... well, you get the point, it's jam-packed! But more importantly, it's a stand-alone, self-contained VM environment. Microsoft has instructions as to how to connect remotely with Diagnostic Studio here. The deceiving part of this is that SP2010DS prompts you for credentials, so I thought I was getting the right account to run the reports. I tried all the PowerShell commands in the link above, but I still ended up getting the following errors:

        06/28/2011 12:50:18    Connecting to remote server failed with the following error message : The WinRM client cannot process the request... If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.

        06/28/2011 12:54:47    Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.

    You might also get an error message like this: The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.

    Explanation

    After looking at the event logs on the target environment, I noticed that there were several security exceptions. After looking at the specifics around who was denied access, I was able to see the account that was being denied access: it was the client machine's administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?

    Resolution

    I can tell you what I did, but I wish there was a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected the machine to the domain. I ran all the necessary remote-credentials commands mentioned here (a sketch of the typical commands appears at the end of this post). Those commands add the group policy for you! Once I did this I was able to authenticate properly and get the reports.

    Conclusion

    You can run SharePoint 2010 Diagnostic Studio remotely, but it will require some specific scenarios. A couple of things I should mention: as far as I understand, SP2010 DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a server admin on the SharePoint Server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally I would like to have a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that unless I enable Kerberos on my Windows 7 machine now. If you have it installed in the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.
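
    For reference, the remoting setup in those instructions generally boils down to enabling CredSSP on both ends; a hedged sketch of the typical commands (the host names are placeholders - the authoritative list is in the linked Microsoft article):

        # On the client machine running Diagnostic Studio
        # (this is what adds the credentials-delegation group policy):
        Enable-WSManCredSSP -Role Client -DelegateComputer "*.domain.com" -Force

        # On the target SharePoint server:
        Enable-WSManCredSSP -Role Server -Force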

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white, yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means: not just when a developer says he’s done right after he writes the last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples:

    * Code Reviews
    * StyleCop
    * FxCop
    * User Manuals Updated
    * Architecture Docs Updated
    * Tested by QA
    * Tested by UAT
    * Installers Created
    * Support Knowledge Base Updated
    * Deployment Instructions (for Ops) written
    * Automated Unit Tests Run
    * Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.

    Sure there are other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate):

    Release
    * Create/update deployment instructions for Ops
    * Instructional videos updated
    * Run manual regression test suite
    * UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)

    Sprint
    * Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    * User guides updated
    * Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo)

    User Story
    * Manual test scripts developed and run
    * Tested by BA
    * Deployed in shared QA environment using the automated deployment process
    * Peer code review

    Check-In
    * Compiled (warning-free)
    * Passes StyleCop
    * Passes FxCop
    * Create installer packages
    * Run automated tests
    * Run automated integration tests

    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a Sprint meaningless, since it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true (the Sprint is done regardless when the end date rolls around), if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item, to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • How to debug without Visual Studio?

    - by aF
    Hello. The stack is Python -> C++ DLL -> C# DLL: I have a COM interop C# DLL that is loaded by a wrapper C++ DLL through the .tlb file generated from the C#, for use in a Python project. When I run it on my computer it works fine, but when I run it on a computer that has just been formatted it gives: WindowsError: exception code 0xe0434f4d. I have the Visual C++ redistributable and the .NET Compact Framework 3.5 installed on the formatted computer. How can I see what the actual exception is on a computer that does not have Visual Studio installed? How can I debug all of this? I can't debug the DLLs themselves, can I? Note: on my computer everything works well, so maybe some DLL or file is missing. I already used Dependency Walker to check whether a DLL is missing - and nope!
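
    One practical option (my suggestion, not from the question): 0xe0434f4d is just the generic SEH code used when a managed (.NET) exception crosses into native code, so have the C# layer log the real CLR exception itself. A minimal sketch to call once at managed start-up:

        using System;
        using System.IO;

        static class CrashLogger
        {
            // Writes any unhandled CLR exception (type, message, stack trace)
            // to a file, so the details survive on machines without Visual Studio.
            public static void Install()
            {
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                    File.AppendAllText("crash.log", e.ExceptionObject.ToString() + Environment.NewLine);
            }
        }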

    Read the article

  • WPF: Menu items and combo boxes don't render in Windows 7 64-bit

    - by lilserf
    I'm trying to use an existing internal WPF application (I do have access to the source), but it was developed on XP and I'm using Windows 7 64-bit. When I click (for instance) the File menu, 90% of the time I see no drop-down menu at all. The menu still exists - I can use the arrow keys to navigate up and down and choose an option if I happen to know the order of the options - but nothing renders at all. The other 10% of the time, the menu or some portion of it DOES render, but as I move the cursor up and down I get graphical corruption or disappearing options until I end up back at the "no menu is visible at all" state. This is also true of combo boxes within the application: they show no data when I drop them down, but I can arrow down and choose an entry. Microsoft has some advice about WPF rendering issues here, but none of these steps has helped with my issue.
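
    A workaround worth trying (hedged - it treats the symptom, which looks like a video-driver issue): force WPF into software rendering at startup. On .NET 4 this is one assignment:

        using System.Windows;
        using System.Windows.Interop;
        using System.Windows.Media;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                // Bypass the GPU pipeline for the whole process; menus and
                // combo boxes then render through the software rasterizer.
                RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
                base.OnStartup(e);
            }
        }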

    Read the article

  • Asynchronous web crawling in F# - is something wrong?

    - by jlezard
    Not quite sure if it is OK to do this, but my question is: is there something wrong with my code? It doesn't go as fast as I would like, and since I am using lots of async workflows, maybe I am doing something wrong. The goal here is to build something that can crawl 20,000 pages in less than an hour.

        open System
        open System.Text
        open System.Net
        open System.IO
        open System.Text.RegularExpressions
        open System.Collections.Generic
        open System.ComponentModel
        open Microsoft.FSharp
        open System.Threading

        //This is the Parallel.Fs file

        type ComparableUri ( uri: string ) =
            inherit System.Uri( uri )

            let elts (uri:System.Uri) =
                uri.Scheme, uri.Host, uri.Port, uri.Segments

            interface System.IComparable with
                member this.CompareTo( uri2 ) =
                    compare (elts this) (elts (uri2 :?> ComparableUri))

            override this.Equals(uri2) =
                compare this (uri2 :?> ComparableUri) = 0

            override this.GetHashCode() = 0

        /////////////////////////////////////////////// Functions to retrieve the html string //////////////////////////////

        let mutable error = Set.empty<ComparableUri>
        let mutable visited = Set.empty<ComparableUri>

        let getHtmlPrimitiveAsyncDelay (delay:int) (uri : ComparableUri) =
            async {
                try
                    let req = (WebRequest.Create(uri)) :?> HttpWebRequest
                    // 'use' is equivalent to 'using' in C# for an IDisposable
                    req.UserAgent <- "Mozilla"
                    //Console.WriteLine("Waiting")
                    do! Async.Sleep(delay * 250)
                    let! resp = (req.AsyncGetResponse())
                    Console.WriteLine(uri.AbsoluteUri + " got response after delay " + string delay)
                    use stream = resp.GetResponseStream()
                    use reader = new StreamReader(stream)
                    let html = reader.ReadToEnd()
                    return html
                with
                | _ as ex ->
                    Console.WriteLine( ex.ToString() )
                    lock error (fun () -> error <- error.Add uri )
                    lock visited (fun () -> visited <- visited.Add uri )
                    return "BadUri"
            }

        /////////////////////////////////////////////// Active pattern matching to retrieve hrefs //////////////////////////////

        let (|Matches|_|) (pat:string) (inp:string) =
            let m = Regex.Matches(inp, pat)
            // Note the List.tail, since the first group is always the entirety of the matched string.
            if m.Count > 0 then Some (List.tail [ for g in m -> g.Value ]) else None

        let (|Match|_|) (pat:string) (inp:string) =
            let m = Regex.Match(inp, pat)
            // Note the List.tail, since the first group is always the entirety of the matched string.
            if m.Success then Some (List.tail [ for g in m.Groups -> g.Value ]) else None

        /////////////////////////////////////////////// Find bad hrefs //////////////////////////////

        let isEmail (link:string) = link.Contains("@")
        let isMailto (link:string) = if Seq.length link >= 6 then link.[0..5] = "mailto" else false
        let isJavascript (link:string) = if Seq.length link >= 10 then link.[0..9] = "javascript" else false
        let isBadUri (link:string) = link = "BadUri"
        let isEmptyHttp (link:string) = link = "http://"
        let isFile (link:string) = if Seq.length link >= 6 then link.[0..5] = "file:/" else false
        let containsPipe (link:string) = link.Contains("|")
        let isAdLink (link:string) =
            if Seq.length link >= 6 then link.[0..5] = "adlink"
            elif Seq.length link >= 9 then link.[0..8] = "http://adLink"
            else false

        let getHref (htmlString:string) =
            let urlPat = "href=\"([^\"]+)"
            match htmlString with
            | Matches urlPat urls ->
                urls |> List.map( fun href ->
                    match href with
                    | Match (urlPat) (link::[]) -> link
                    | _ -> failwith "The href was not in correct format, there was more than one match" )
            | _ -> Console.WriteLine( "No links for this page" ); []
            |> List.filter( fun link -> not (isEmail link) )
            |> List.filter( fun link -> not (isMailto link) )
            |> List.filter( fun link -> not (isJavascript link) )
            |> List.filter( fun link -> not (isBadUri link) )
            |> List.filter( fun link -> not (isEmptyHttp link) )
            |> List.filter( fun link -> not (isFile link) )
            |> List.filter( fun link -> not (containsPipe link) )
            |> List.filter( fun link -> not (isAdLink link) )

        let treatAjax (href:System.Uri) =
            let link = href.ToString()
            let firstPart = (link.Split([|"#"|], System.StringSplitOptions.None)).[0]
            new Uri(firstPart)

        //only follow pages with certain extensions or ones with no extensions
        let followHref (href:System.Uri) =
            let valid2 = set [".py"]
            let valid3 = set [".php"; ".htm"; ".asp"]
            let valid4 = set [".php3"; ".php4"; ".php5"; ".html"; ".aspx"]
            let arrLength = href.Segments |> Array.length
            let lastExtension = (href.Segments).[arrLength-1]
            let lengthLastExtension = Seq.length lastExtension
            if (lengthLastExtension <= 3) then
                not( lastExtension.Contains(".") )
            else
                //test for the 2 case
                let last4 = lastExtension.[(lengthLastExtension-1)-3..(lengthLastExtension-1)]
                let isValid2 = valid2 |> Seq.exists(fun validEnd -> last4.EndsWith(validEnd))
                if isValid2 then true
                else
                    if lengthLastExtension <= 4 then not( last4.Contains(".") )
                    else
                        let last5 = lastExtension.[(lengthLastExtension-1)-4..(lengthLastExtension-1)]
                        let isValid3 = valid3 |> Seq.exists(fun validEnd -> last5.EndsWith(validEnd))
                        if isValid3 then true
                        else
                            if lengthLastExtension <= 5 then not( last5.Contains(".") )
                            else
                                let last6 = lastExtension.[(lengthLastExtension-1)-5..(lengthLastExtension-1)]
                                let isValid4 = valid4 |> Seq.exists(fun validEnd -> last6.EndsWith(validEnd))
                                if isValid4 then true
                                else not( last6.Contains(".") ) && not(lastExtension.[0..5] = "mailto")

        //Create the correct links (add the homepage, make them a comparable Uri)
        let hrefLinksToUri ( uri:ComparableUri ) (hrefLinks:string list) =
            hrefLinks
            |> List.map( fun link ->
                try
                    if Seq.length link < 4 then Some(new Uri( uri, link ))
                    else
                        if link.[0..3] = "http" then Some(new Uri(link))
                        else Some(new Uri( uri, link ))
                with
                | _ as ex ->
                    Console.WriteLine(link)
                    lock error (fun () -> error <- error.Add uri)
                    None )
            |> List.filter( fun link -> link.IsSome )
            |> List.map( fun o -> o.Value )
            |> List.map( fun uri -> new ComparableUri( string uri ) )

        //Treat uri, removing the ajax last part, and only following the links specified by Benoit
        let linksToFollow (hrefUris:ComparableUri list) =
            hrefUris
            |> List.map( treatAjax )
            |> List.filter( fun link -> followHref link )
            |> List.map( fun uri -> new ComparableUri( string uri ) )
            |> Set.ofList

        let needToVisit uri =
            ( lock visited (fun () -> not( visited.Contains uri ) ) )
            && ( lock error (fun () -> not( error.Contains uri ) ) )

        let getLinksToFollowAsyncDelay (delay:int) ( uri: ComparableUri ) =
            async {
                let! links = getHtmlPrimitiveAsyncDelay delay uri
                lock visited (fun () -> visited <- visited.Add uri)
                let linksToFollow =
                    getHref links
                    |> hrefLinksToUri uri
                    |> linksToFollow
                    |> Set.filter( needToVisit )
                    |> Set.map( fun link -> if uri.Authority = link.Authority then link else link )
                return linksToFollow
            }

        //Add delays if visiting the same authority
        let getDelay (uri:ComparableUri) (authorityDelay:Dictionary<string,int>) =
            let uriAuthority = uri.Authority
            let hasAuthority, delay = authorityDelay.TryGetValue(uriAuthority)
            if hasAuthority then
                authorityDelay.[uriAuthority] <- delay + 1
                delay
            else
                authorityDelay.Add(uriAuthority, 1)
                0

        let rec getLinksToFollowFromSetAsync maxIteration ( uris: seq<ComparableUri> ) =
            let authorityDelay = Dictionary<string,int>()
            if maxIteration = 100 then Console.WriteLine("Finished")
            else
                //Unite by authority: add a delay for those with the same authority, ignore the others
                let stopwatch = System.Diagnostics.Stopwatch()
                stopwatch.Start()
                let newLinks =
                    uris
                    |> Seq.map( fun uri ->
                        let delay = lock authorityDelay (fun () -> getDelay uri authorityDelay )
                        getLinksToFollowAsyncDelay delay uri )
                    |> Async.Parallel
                    |> Async.RunSynchronously
                    |> Seq.concat
                stopwatch.Stop()
                Console.WriteLine("\n\n\n\n\n\n\nTimeElapse : " + string stopwatch.Elapsed + "\n\n\n\n\n\n\n\n\n")
                getLinksToFollowFromSetAsync (maxIteration+1) newLinks

        getLinksToFollowFromSetAsync 0 (seq [ ComparableUri( "http://twitter.com/" ) ])
        Console.WriteLine("Finished")

    Some feedback would be great! Thank you. (Note: this is just something I am doing for fun.)
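
    One structural observation (a suggestion, not from the original post): getLinksToFollowFromSetAsync runs the crawl in lock-step batches - Async.RunSynchronously waits for every fetch in a batch before the next batch starts, so a single slow host stalls the entire crawl, and the per-authority delays multiply that. A hedged sketch of per-request throttling instead (Async.AwaitTask on a plain Task needs a newer FSharp.Core than the post's era):

        open System.Threading

        // Sketch: cap the number of requests in flight with a semaphore so one
        // slow host no longer stalls a whole batch.
        let throttle = new SemaphoreSlim(20)

        let fetchThrottled (fetch : 'a -> Async<'b>) (item : 'a) =
            async {
                do! throttle.WaitAsync() |> Async.AwaitTask
                try
                    return! fetch item
                finally
                    throttle.Release() |> ignore
            }

        // Usage idea: wrap the page fetch, then launch everything at once.
        // let! results = uris |> Seq.map (fetchThrottled (getHtmlPrimitiveAsyncDelay 0)) |> Async.Parallel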

    Read the article

  • Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    - by SeanMcAlinden
    I’ve recently started a project with a few mates to learn the ins and outs of Dependency Injection, AOP and a number of other pretty crucial patterns of development, as we’ve all been using these patterns for a while but have relied totally on third-party solutions to do the magic. We thought it would be interesting to really get into the details by rolling our own IoC container and hopefully learn a lot on the way, and you never know, we might even create an excellent framework. The open source project is called Rapid IoC and is hosted at http://rapidioc.codeplex.com/

    One of the most interesting tasks for me is creating the dynamic proxy generator for enabling Aspect Orientated Programming (AOP). In this series of articles, I’m going to track each step I take in creating the dynamic proxy generator and I’ll try my best to explain what everything means - mainly as I’ll be using Reflection.Emit to emit a fair amount of intermediate language code (IL) to create the proxy types at runtime, which can be a little taxing to read. It’s worth noting that building the proxy is without a doubt going to be slightly painful, so I imagine there will be plenty of areas I’ll need to change along the way. Anyway, let's get started...

    Part 1 - Creating the Assembly builder, Module builder and caching mechanism

    Part 1 is going to be a really nice simple start: I’m just going to create the assembly, module and type caches. The reason we need caches for the assembly, module and types is simply to save the overhead of recreating proxy types that have already been generated; this will be one of the important steps to ensure that the framework is fast - kind of important as we’re calling the IoC container ‘Rapid’. It would be a little bit embarrassing if we managed to create the slowest framework.

    The Assembly builder

    The assembly builder is what is used to create an assembly at runtime. We’re going to have two overloads: one will be for the actual use of the proxy generator; the other will be mainly for testing purposes, as it will also save the assembly so we can use Reflector to examine the code that has been created. Here’s the code:

    DynamicAssemblyBuilder

        using System;
        using System.Reflection;
        using System.Reflection.Emit;

        namespace Rapid.DynamicProxy.Assembly
        {
            /// <summary>
            /// Class for creating an assembly builder.
            /// </summary>
            internal static class DynamicAssemblyBuilder
            {
                #region Create

                /// <summary>
                /// Creates an assembly builder.
                /// </summary>
                /// <param name="assemblyName">Name of the assembly.</param>
                public static AssemblyBuilder Create(string assemblyName)
                {
                    AssemblyName name = new AssemblyName(assemblyName);

                    AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                            name, AssemblyBuilderAccess.Run);

                    DynamicAssemblyCache.Add(assembly);

                    return assembly;
                }

                /// <summary>
                /// Creates an assembly builder and saves the assembly to the passed in location.
                /// </summary>
                /// <param name="assemblyName">Name of the assembly.</param>
                /// <param name="filePath">The file path.</param>
                public static AssemblyBuilder Create(string assemblyName, string filePath)
                {
                    AssemblyName name = new AssemblyName(assemblyName);

                    AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                            name, AssemblyBuilderAccess.RunAndSave, filePath);

                    DynamicAssemblyCache.Add(assembly);

                    return assembly;
                }

                #endregion
            }
        }

    So hopefully the above class is fairly explanatory: an AssemblyName is created using the passed-in string for the actual name of the assembly. An AssemblyBuilder is then constructed in the current AppDomain and, depending on the overload used, it is either just run in the current context or it is set up ready for saving. It is then added to the cache.

    DynamicAssemblyCache

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Exceptions;
        using Rapid.DynamicProxy.Resources.Exceptions;

        namespace Rapid.DynamicProxy.Assembly
        {
            /// <summary>
            /// Cache for storing the dynamic assembly builder.
            /// </summary>
            internal static class DynamicAssemblyCache
            {
                #region Declarations

                private static object syncRoot = new object();
                internal static AssemblyBuilder Cache = null;

                #endregion

                #region Adds a dynamic assembly to the cache.

                /// <summary>
                /// Adds a dynamic assembly builder to the cache.
                /// </summary>
                /// <param name="assemblyBuilder">The assembly builder.</param>
                public static void Add(AssemblyBuilder assemblyBuilder)
                {
                    lock (syncRoot)
                    {
                        Cache = assemblyBuilder;
                    }
                }

                #endregion

                #region Gets the cached assembly

                /// <summary>
                /// Gets the cached assembly builder.
                /// </summary>
                /// <returns></returns>
                public static AssemblyBuilder Get
                {
                    get
                    {
                        lock (syncRoot)
                        {
                            if (Cache != null)
                            {
                                return Cache;
                            }
                        }

                        throw new RapidDynamicProxyAssertionException(AssertionResources.NoAssemblyInCache);
                    }
                }

                #endregion
            }
        }

    The cache is simply a static property that will store the AssemblyBuilder (I know it’s a little weird that I’ve made it public; this is for testing purposes - I know that’s a bad excuse, but hey...). There are two methods for using the cache, Add and Get; these just provide thread-safe access to the cache.

    The Module Builder

    The module builder is required as the created proxy classes will need to live inside a module within the assembly. Here’s the code:

    DynamicModuleBuilder

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Assembly;

        namespace Rapid.DynamicProxy.Module
        {
            /// <summary>
            /// Class for creating a module builder.
            /// </summary>
            internal static class DynamicModuleBuilder
            {
                /// <summary>
                /// Creates a module builder using the cached assembly.
                /// </summary>
                public static ModuleBuilder Create()
                {
                    string assemblyName = DynamicAssemblyCache.Get.GetName().Name;

                    ModuleBuilder moduleBuilder = DynamicAssemblyCache.Get.DefineDynamicModule
                        (assemblyName, string.Format("{0}.dll", assemblyName));

                    DynamicModuleCache.Add(moduleBuilder);

                    return moduleBuilder;
                }
            }
        }

    As you can see, the module builder is created on the assembly that lives in the DynamicAssemblyCache; the module is given the assembly name and also a string representing the filename if the assembly is to be saved. It is then added to the DynamicModuleCache.

    DynamicModuleCache

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Exceptions;
        using Rapid.DynamicProxy.Resources.Exceptions;

        namespace Rapid.DynamicProxy.Module
        {
            /// <summary>
            /// Class for storing the module builder.
            /// </summary>
            internal static class DynamicModuleCache
            {
                #region Declarations

                private static object syncRoot = new object();
                internal static ModuleBuilder Cache = null;

                #endregion

                #region Add

                /// <summary>
                /// Adds a dynamic module builder to the cache.
                /// </summary>
                /// <param name="moduleBuilder">The module builder.</param>
                public static void Add(ModuleBuilder moduleBuilder)
                {
                    lock (syncRoot)
                    {
                        Cache = moduleBuilder;
                    }
                }

                #endregion

                #region Get

                /// <summary>
                /// Gets the cached module builder.
                /// </summary>
                /// <returns></returns>
                public static ModuleBuilder Get
                {
                    get
                    {
                        lock (syncRoot)
                        {
                            if (Cache != null)
                            {
                                return Cache;
                            }
                        }

                        throw new RapidDynamicProxyAssertionException(AssertionResources.NoModuleInCache);
                    }
                }

                #endregion
            }
        }

    The DynamicModuleCache is very similar to the assembly cache; it is simply a statically stored module with thread-safe Add and Get methods.

    The DynamicTypeCache

    To end off this post, I’m going to create the cache for storing the generated proxy classes. I’ve spent a fair amount of time thinking about the type of collection I should use to store the types and have finally decided that for the time being I’m going to use a generic dictionary. This may change when I can actually performance-test the proxy generator, but for the time being I think it makes good sense in theory, mainly as it pretty much maintains its performance with varying numbers of items - almost constant, O(1). Plus I won’t ever need to loop through the items, which is not the dictionary's strong point. Here’s the code as it currently stands:

    DynamicTypeCache

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        namespace Rapid.DynamicProxy.Types
        {
            /// <summary>
            /// Cache for storing proxy types.
            /// </summary>
            internal static class DynamicTypeCache
            {
                #region Declarations

                static object syncRoot = new object();
                public static Dictionary<string, Type> Cache = new Dictionary<string, Type>();

                #endregion

                /// <summary>
                /// Adds a proxy to the type cache.
                /// </summary>
                /// <param name="type">The type.</param>
                /// <param name="proxy">The proxy.</param>
                public static void AddProxyForType(Type type, Type proxy)
                {
                    lock (syncRoot)
                    {
                        Cache.Add(GetHashCode(type.AssemblyQualifiedName), proxy);
                    }
                }

                /// <summary>
                /// Tries to get the proxy for the given type.
                /// </summary>
                /// <param name="type">The type.</param>
                /// <returns></returns>
                public static Type TryGetProxyForType(Type type)
                {
                    lock (syncRoot)
                    {
                        Type proxyType;
                        Cache.TryGetValue(GetHashCode(type.AssemblyQualifiedName), out proxyType);
                        return proxyType;
                    }
                }

                #region Private Methods

                private static string GetHashCode(string fullName)
                {
                    SHA1CryptoServiceProvider provider = new SHA1CryptoServiceProvider();
                    Byte[] buffer = Encoding.UTF8.GetBytes(fullName);
                    Byte[] hash = provider.ComputeHash(buffer, 0, buffer.Length);
                    return Convert.ToBase64String(hash);
                }

                #endregion
            }
        }

    As you can see, there are two public methods: one for adding to the cache and one for getting from the cache. Hopefully they should be clear enough; the Get is a TryGet as I do not want the dictionary to throw an exception if a proxy doesn’t exist within the cache. Other than that, I’ve decided to create a key using the SHA1CryptoServiceProvider. This may change, but my initial thought is that the SHA1 algorithm is pretty fast to put together using the provider and it is also very unlikely to have any hashing collisions. (There is some maths behind how unlikely this is - here’s the wiki if you’re interested: http://en.wikipedia.org/wiki/SHA_hash_functions)

    Anyway, that’s the end of part 1. Although I haven’t started any of the fun stuff (by fun I mean hair-pulling, teeth-grating Reflection.Emit style fun), I’ve got the basis of the DynamicProxy in place, so all we have to worry about now is creating the types, interceptor classes, method invocation information classes and finally a really nice fluent interface that will abstract all of the hard-core craziness away and leave us with a lightning fast, easy to use AOP framework. Hope you find the series interesting. All of the source code can be viewed and/or downloaded at our codeplex site - http://rapidioc.codeplex.com/ Kind Regards, Sean.
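
    To make the intended flow concrete, here is a hedged sketch of how a generator might consult the type cache before emitting a new proxy. GenerateProxy is a hypothetical stand-in for the IL-emitting code promised in later parts; the cache calls match the DynamicTypeCache API shown above:

        // Illustrative only: GenerateProxy does not exist yet in this article.
        public static Type GetOrCreateProxy(Type serviceType)
        {
            Type proxy = DynamicTypeCache.TryGetProxyForType(serviceType);
            if (proxy == null)
            {
                proxy = GenerateProxy(serviceType);          // covered in a future part
                DynamicTypeCache.AddProxyForType(serviceType, proxy);
            }
            return proxy;
        }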

    Read the article

  • Curious about IObservable? Here’s a quick example to get you started!

    - by Roman Schindlauer
    Have you heard about IObservable/IObserver support in Microsoft StreamInsight 1.1? Then you probably want to try it out. If this is your first incursion into the IObservable/IObserver pattern, this blog post is for you! StreamInsight 1.1 introduced the ability to use IEnumerable and IObservable objects as event sources and sinks. The IEnumerable case is pretty straightforward, since many data collections are already surfacing as this type. This was already covered by Colin in his blog. Creating your own IObservable event source is a little more involved but no less exciting - here is a primer.

    First, let's look at a very simple observable data source. All it does is publish an integer in regular time periods to its registered observers. (For more information on IObservable, see http://msdn.microsoft.com/en-us/library/dd990377.aspx.)

        sealed class RandomSubject : IObservable<int>, IDisposable
        {
            private bool _done;
            private readonly List<IObserver<int>> _observers;
            private readonly Random _random;
            private readonly object _sync;
            private readonly Timer _timer;
            private readonly int _timerPeriod;

            /// <summary>
            /// Random observable subject. It produces an integer in regular time periods.
            /// </summary>
            /// <param name="timerPeriod">Timer period (in milliseconds)</param>
            public RandomSubject(int timerPeriod)
            {
                _done = false;
                _observers = new List<IObserver<int>>();
                _random = new Random();
                _sync = new object();
                _timer = new Timer(EmitRandomValue);
                _timerPeriod = timerPeriod;
                Schedule();
            }

            public IDisposable Subscribe(IObserver<int> observer)
            {
                lock (_sync)
                {
                    _observers.Add(observer);
                }
                return new Subscription(this, observer);
            }

            public void OnNext(int value)
            {
                lock (_sync)
                {
                    if (!_done)
                    {
                        foreach (var observer in _observers)
                        {
                            observer.OnNext(value);
                        }
                    }
                }
            }

            public void OnError(Exception e)
            {
                lock (_sync)
                {
                    foreach (var observer in _observers)
                    {
                        observer.OnError(e);
                    }
                    _done = true;
                }
            }

            public void OnCompleted()
            {
                lock (_sync)
                {
                    foreach (var observer in _observers)
                    {
                        observer.OnCompleted();
                    }
                    _done = true;
                }
            }

            void IDisposable.Dispose()
            {
                _timer.Dispose();
            }

            private void Schedule()
            {
                lock (_sync)
                {
                    if (!_done)
                    {
                        _timer.Change(_timerPeriod, Timeout.Infinite);
                    }
                }
            }

            private void EmitRandomValue(object _)
            {
                var value = (int)(_random.NextDouble() * 100);
                Console.WriteLine("[Observable]\t" + value);
                OnNext(value);
                Schedule();
            }

            private sealed class Subscription : IDisposable
            {
                private readonly RandomSubject _subject;
                private IObserver<int> _observer;

                public Subscription(RandomSubject subject, IObserver<int> observer)
                {
                    _subject = subject;
                    _observer = observer;
                }

                public void Dispose()
                {
                    IObserver<int> observer = _observer;
                    if (null != observer)
                    {
                        lock (_subject._sync)
                        {
                            _subject._observers.Remove(observer);
                        }
                        _observer = null;
                    }
                }
            }
        }

    So far, so good. Now let's write a program that consumes data emitted by the observable as a stream of point events in a StreamInsight query. First, let's define our payload type:

        class Payload
        {
            public int Value { get; set; }

            public override string ToString()
            {
                return "[StreamInsight]\tValue: " + Value.ToString();
            }
        }

    Now, let's write the program. First, we will instantiate the observable subject. Then we'll use the ToPointStream() method to consume it as a stream. We can now write any query over the source - here, a simple pass-through query.

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Starting observable source...");
                using (var source = new RandomSubject(500))
                {
                    Console.WriteLine("Started observable source.");
                    using (var server = Server.Create("Default"))
                    {
                        var application = server.CreateApplication("My Application");

                        var stream = source.ToPointStream(application,
                            e => PointEvent.CreateInsert(DateTime.Now, new Payload { Value = e }),
                            AdvanceTimeSettings.StrictlyIncreasingStartTime,
                            "Observable Stream");

                        var query = from e in stream
                                    select e;

                        [...]

    We're done with consuming input and querying it! But you probably want to see the output of the query. Did you know you can turn a query into an observable subject as well? Let's do precisely that, and exploit the Reactive Extensions for .NET (http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx) to quickly visualize the output. Notice we're subscribing Console.WriteLine() to the query, a pattern you may find useful for quick debugging of your queries. Reminder: you'll need to install the Reactive Extensions for .NET (Rx for .NET Framework 4.0), and reference System.CoreEx and System.Reactive in your project.

                        [...]

                        Console.ReadLine();
                        Console.WriteLine("Starting query...");
                        using (query.ToObservable().Subscribe(Console.WriteLine))
                        {
                            Console.WriteLine("Started query.");
                            Console.ReadLine();
                            Console.WriteLine("Stopping query...");
                        }
                        Console.WriteLine("Stopped query.");
                    }
                    Console.ReadLine();
                    Console.WriteLine("Stopping observable source...");
                    source.OnCompleted();
                }
                Console.WriteLine("Stopped observable source.");
            }
        }

    We hope this blog post gets you started. And for bonus points, you can go ahead and rewrite the observable source (the RandomSubject class) using the Reactive Extensions for .NET! The entire sample project is attached to this article. Happy querying! Regards, The StreamInsight Team

    Read the article

  • OS X Snow Leopard freezes

    - by lydonchandra
    Hi, I got this error:

        Timexxxx In 'CFPasteboardCopyData', file /SourceCache/CF/CF-550.13/AppServices.subproj/CFPasteboard.c, line 1951, during lock, spin lock 0x15457a0c8 has value 0xf0000000, which is neither locked nor unlocked. The memory has been smashed.

    My Mac freezes, except for the mouse and keyboard. What is happening, and how can I prevent it from happening in the future?

    Read the article

  • Cannot Start Nginx Compiled from Source

    - by Jason Alan Kennedy
    I am trying to compile Nginx from source, based on the original compiled Nginx server running on my DigitalOcean server (Ubuntu 14.04 x64), but with a few extra modules. I can get everything installed smoothly, but I cannot get it to start. I am sure the init script is correct, because I copied the original off the currently running Nginx server [even though I see that Nginx now adds the init script when compiling from source]. Below is the [lengthy] process I am performing - sorry, but I wanted to be thorough for those who need the info. Because I am a newbie to Nginx, I am sure I am missing something or just have it all wrong. If you can look over what I have done and spot anything I need to change, I will greatly appreciate it. Thnx!

    With the original Nginx server still running, I check the current/running Nginx configuration so I can build the new instance the same way, but with the added modules:

    nginx -V
    # The output:
    configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module

    NOTE: The configure arguments below return errors during 'make', so I removed them. I don't know what they are - could this be related to my issue?

    --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro'

    Moving on:

    # So I don't have to sudo every line:
    sudo bash
    # Check for updates first thing:
    apt-get update
    # Install various prerequisites needed to compile Nginx:
    apt-get install build-essential libgd2-xpm-dev lsb-base zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libxslt1-dev libxml2 libssl-dev libgeoip-dev tar unzip openssl
    # Create the system user [if it doesn't exist - but I see it's already there on DigitalOcean's Droplets]:
    adduser --system --no-create-home --disabled-login --disabled-password --group www-data
    # Download Nginx:
    wget http://nginx.org/download/nginx-1.7.4.tar.gz
    tar -xvzf nginx-1.7.4.tar.gz
    # Then Google PageSpeed:
    wget https://github.com/pagespeed/ngx_pagespeed/archive/release-1.8.31.4-beta.zip
    unzip release-1.8.31.4-beta.zip
    # cd into the PageSpeed directory:
    cd ngx_pagespeed-release-1.8.31.4-beta/
    # and add the PSOL files in there:
    wget https://dl.google.com/dl/page-speed/psol/1.8.31.4.tar.gz
    tar -xzvf 1.8.31.4.tar.gz
    # Get back to the root directory:
    cd
    # I add the ngx_cache_purge module (I will install the Nginx Helper plugin for WP later):
    wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.1.zip
    unzip 2.1.zip
    # Add the headers-more-nginx-module:
    wget https://github.com/openresty/headers-more-nginx-module/archive/v0.25.zip
    unzip v0.25.zip
    # and the naxsi module for added security:
    wget https://github.com/nbs-system/naxsi/archive/0.53-2.tar.gz
    tar -xvzf 0.53-2.tar.gz
    # cd to the new Nginx directory:
    cd nginx-1.7.4
    # Set up the configuration build based on the current running Nginx config args, and add my additional modules:
    ./configure \
    --add-module=$HOME/naxsi-0.53-2/naxsi_src \
    --prefix=/usr/share/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --http-log-path=/var/log/nginx/access.log \
    --error-log-path=/var/log/nginx/error.log \
    --lock-path=/var/lock/nginx.lock \
    --pid-path=/run/nginx.pid \
    --http-client-body-temp-path=/var/lib/nginx/body \
    --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
    --http-proxy-temp-path=/var/lib/nginx/proxy \
    --http-scgi-temp-path=/var/lib/nginx/scgi \
    --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \
    --user=www-data \
    --group=www-data \
    --with-debug \
    --with-pcre-jit \
    --with-ipv6 \
    --with-http_ssl_module \
    --with-http_stub_status_module \
    --with-http_realip_module \
    --with-http_addition_module \
    --with-http_dav_module \
    --with-http_geoip_module \
    --with-http_gzip_static_module \
    --with-http_image_filter_module \
    --with-http_spdy_module \
    --with-http_sub_module \
    --with-http_xslt_module \
    --with-mail \
    --with-mail_ssl_module \
    --add-module=$HOME/ngx_pagespeed-release-1.8.31.4-beta \
    --add-module=$HOME/ngx_cache_purge-2.1 \
    --add-module=$HOME/headers-more-nginx-module-0.25
    [ENTER]

    Configuration summary:

    Configuration summary
    + using system PCRE library
    + using system OpenSSL library
    + md5: using OpenSSL library
    + sha1: using OpenSSL library
    + using system zlib library
    nginx path prefix: "/usr/share/nginx"
    nginx binary file: "/usr/share/nginx/sbin/nginx"
    nginx configuration prefix: "/etc/nginx"
    nginx configuration file: "/etc/nginx/nginx.conf"
    nginx pid file: "/run/nginx.pid"
    nginx error log file: "/var/log/nginx/error.log"
    nginx http access log file: "/var/log/nginx/access.log"
    nginx http client request body temporary files: "/var/lib/nginx/body"
    nginx http proxy temporary files: "/var/lib/nginx/proxy"
    nginx http fastcgi temporary files: "/var/lib/nginx/fastcgi"
    nginx http uwsgi temporary files: "/var/lib/nginx/uwsgi"
    nginx http scgi temporary files: "/var/lib/nginx/scgi"

    Next step: I cd to root, and I check the old Nginx folder locations, double-checking against the 'make' output to see that they are the same:

    whereis nginx
    # Output:
    nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx

    NOTE: Not sure about the '/usr/sbin/nginx' - possible issue???

    Next I copy the old /etc/nginx/nginx.conf, /etc/nginx/sites-available/default, /etc/nginx/sites-enabled/default and /etc/init.d/nginx to local text files for safe keeping, to reuse on the new Nginx server. Then I stop the running Nginx server:

    service nginx stop

    and verify it's stopped:

    service --status-all
    # Output:
    [ - ] nginx

    To verify that there are two Nginx directories, I cd to:

    cd nginx*

    and the output is an error indicating there are two nginx folders - cool beans! :)

    Now install the new Nginx server:

    cd nginx-1.7.4
    make install

    # INSTALL OUTPUT ########################################
    make -f objs/Makefile install
    make[1]: Entering directory `/home/walkingfish/nginx-1.7.4'
    test -d '/usr/share/nginx' || mkdir -p '/usr/share/nginx'
    test -d '/usr/share/nginx/sbin' || mkdir -p '/usr/share/nginx/sbin'
    test ! -f '/usr/share/nginx/sbin/nginx' || mv '/usr/share/nginx/sbin/nginx' '/usr/share/nginx/sbin/nginx.old'
    cp objs/nginx '/usr/share/nginx/sbin/nginx'
    test -d '/etc/nginx' || mkdir -p '/etc/nginx'
    cp conf/koi-win '/etc/nginx'
    cp conf/koi-utf '/etc/nginx'
    cp conf/win-utf '/etc/nginx'
    test -f '/etc/nginx/mime.types' || cp conf/mime.types '/etc/nginx'
    cp conf/mime.types '/etc/nginx/mime.types.default'
    test -f '/etc/nginx/fastcgi_params' || cp conf/fastcgi_params '/etc/nginx'
    cp conf/fastcgi_params '/etc/nginx/fastcgi_params.default'
    test -f '/etc/nginx/fastcgi.conf' || cp conf/fastcgi.conf '/etc/nginx'
    cp conf/fastcgi.conf '/etc/nginx/fastcgi.conf.default'
    test -f '/etc/nginx/uwsgi_params' || cp conf/uwsgi_params '/etc/nginx'
    cp conf/uwsgi_params '/etc/nginx/uwsgi_params.default'
    test -f '/etc/nginx/scgi_params' || cp conf/scgi_params '/etc/nginx'
    cp conf/scgi_params '/etc/nginx/scgi_params.default'
    test -f '/etc/nginx/nginx.conf' || cp conf/nginx.conf '/etc/nginx/nginx.conf'
    cp conf/nginx.conf '/etc/nginx/nginx.conf.default'
    test -d '/run' || mkdir -p '/run'
    test -d '/var/log/nginx' || mkdir -p '/var/log/nginx'
    test -d '/usr/share/nginx/html' || cp -R html '/usr/share/nginx'
    test -d '/var/log/nginx' || mkdir -p '/var/log/nginx'
    #########################################################

    I copy/create the files that I saved earlier to text files in sites-available, along with the config, default and init files, then symlink them to sites-enabled, and so on. And now to start the server:

    service nginx start

    And this is where s#!+ hits the fan - nada. I check to see if Nginx is running with service --status-all, and it's not. Also with nginx -V, and it's not installed??? I reboot the system too, and still nothing. So I am not sure what is wrong here. The init script was copied over from the old server along with all the other config files, after deleting the old files. When I opened the newly compiled files, the default Nginx data was present, so I replaced it with my old original data prior to starting the new server for the first time. Also, to be safe, I ran rm /etc/nginx/sites-enabled/default and symlinked with ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default with no errors, and I verified that the data was in the sites-enabled/default file.

    I don't think the server really/fully installed, because of the nginx -V result:

    The program 'nginx' can be found in the following packages:
    * nginx-core
    * nginx-extras
    * nginx-full
    * nginx-light
    * nginx-naxsi
    Try: apt-get install <selected package>

    Do/should I apt-get install nginx-1.7.4? Or what package do I use, given that it's a custom build and make install earlier did nothing? If you need to see the conf files I copied over from the old to the custom server, LMK and I'll post them. Again, your help here would be appreciated!
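    One hedged sanity check worth running before reinstalling anything (paths taken from the configure summary above; whether the old package's /usr/sbin/nginx still exists is an assumption): the custom build installs its binary to /usr/share/nginx/sbin/nginx, which is not on the default $PATH, so a bare nginx -V can fail with apt's "can be found in the following packages" suggestion even when the build installed fine. A minimal sketch:

    # Confirm the custom binary landed where the configure summary said it would:
    ls -l /usr/share/nginx/sbin/nginx

    # Query that binary directly (bypassing $PATH) for its version and modules:
    /usr/share/nginx/sbin/nginx -V

    # Test the copied-over config with the new binary before involving the init script:
    /usr/share/nginx/sbin/nginx -t -c /etc/nginx/nginx.conf

    # If the config test passes, start it directly once to rule out init-script problems:
    /usr/share/nginx/sbin/nginx

    If that starts cleanly, the remaining suspect is the copied init script, which probably still points at the old /usr/sbin/nginx path rather than /usr/share/nginx/sbin/nginx.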

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive.

    This boot menu, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can be burned to a CD or put on a floppy disk, and it enables you to boot to a variety of devices, including USB drives.

    Put PLoP on a CD

    PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt, plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “Booting PLoP Boot Manager” section.

    Put PLoP on a Floppy Disk

    If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. (If you are working from Linux instead, see the dd sketch below.) We aren’t fortunate enough to have a floppy drive installed, but if you do, it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click on the “…” button and browse to plpbt.img. Press the Write button to write PLoP Boot Manager to your floppy disk.

    Booting PLoP Boot Manager

    To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is done by default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS’s boot menu, or the setup menu. The exact steps to do this vary depending on your BIOS – to get a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, some important keyboard strokes are noted somewhere prominent on the screen. In our case, they are at the bottom of the screen. Press Escape to bring up the boot menu.

    Previously, we burned a CD with PLoP Boot Manager on it, so we will select the CD-ROM Drive option and hit Enter. If your BIOS does not have a boot menu, then you will need to access the setup menu and change the boot order to give the floppy disk or CD-ROM drive higher precedence than the hard drive. Usually this setting is found in the “Boot” or “Advanced” section of the setup menu. If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive.

    Despite our BIOS not having the option, we’re now booting using the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from a USB… provided your computer still has a floppy drive. Of course, if your BIOS won’t boot from a USB, it probably has one… or you really need to update it.
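    As an aside on the floppy step: if you are preparing the PLoP floppy from a Linux machine rather than Windows, dd can write the image directly. A minimal sketch, assuming the floppy device is /dev/fd0 (double-check the device name first, since dd overwrites whatever it is pointed at):

    # Write the PLoP boot image byte-for-byte onto the first floppy device:
    dd if=plpbt.img of=/dev/fd0 bs=512
    # Flush buffers so the write completes before you eject the disk:
    sync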
    Download PLoP Boot Manager
    Download RawWrite for Windows

    Read the article

  • Boost Netbook Speed with an SD Card & ReadyBoost

    - by Matthew Guay
    Looking for a way to increase the performance of your netbook? Here’s how you can use a standard SD memory card or a USB flash drive to boost performance with ReadyBoost.

    Most netbooks ship with 1 GB of RAM, and many older netbooks shipped with even less. Even if you want to add more RAM, often they can only be upgraded to a max of 2 GB. With ReadyBoost in Windows 7, it’s easy to boost your system’s performance with flash memory. If your netbook has an SD card slot, you can insert a memory card into it and just leave it there to always boost your netbook’s memory; otherwise, you can use a standard USB flash drive the same way. Also, you can use ReadyBoost on any desktop or laptop; ones with limited memory will see the most performance increase from using it.

    Please note: ReadyBoost requires at least 256 MB of free space on your flash drive, and also requires minimum read/write speeds. Most modern memory cards or flash drives meet these requirements, but be aware that an old card may not work with it.

    Using ReadyBoost

    Insert an SD card into your card reader, or connect a USB flash drive to a USB port on your computer. Windows will automatically see if your flash memory is ReadyBoost capable, and if so, you can directly choose to speed up your computer with ReadyBoost. The ReadyBoost settings dialog will open when you select this. Choose “Use this device” and choose how much space you want ReadyBoost to use. Click OK, and Windows will set up ReadyBoost and start using it to speed up your computer. It will automatically use ReadyBoost whenever the card is connected to the computer.

    When you view your SD card or flash drive in Explorer, you will notice a ReadyBoost file of the size you chose before. This will be deleted when you eject your card or flash drive. If you need to remove your drive to use elsewhere, simply eject it as normal. Windows will inform you that the drive is currently being used. Make sure you have closed any programs or files you had open from the drive, and then press Continue to stop ReadyBoost and eject your drive. If you remove the drive without ejecting it, the ReadyBoost file may still remain on the drive. You can delete this to save space on the drive, and the cache will be recreated the next time you use ReadyBoost.

    Conclusion

    Although ReadyBoost may not make your netbook feel like a Core i7 laptop with 6 GB of RAM, it will still help performance and make multitasking even easier. Also, if you have, say, a memory stick and a flash drive, you can use both of them with ReadyBoost for the maximum benefit. We have even noticed better battery life when multitasking with ReadyBoost, as it lets you use your hard drive less. SD cards and thumb drives are relatively cheap today, and many of us have several already, so this is a great way to improve netbook performance cheaply.

    Read the article

  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted “Microsoft Shows Off Radical New UI, Could Be Used In Windows 8” on Slashdot. I took such grave exception to his post that I found it necessary to write this blog.

    We need to go back many years, to the days of hand-cranked calculators and early mainframe computers. These devices had singular purposes – they were “number crunchers” used to make accounting easier. The front-facing display in early mainframes was “blinken lights.” The calculators did provide printing – in the form of paper tape – and the mainframes used line printers to generate reports as needed.

    We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a typesetting machine. The originals go back to 1867, and the keyboard layout has remained much the same to this day. In the earlier years, the Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I helped manufacture, used a keyboard very similar to the keyboards in use today. It also generated the punched paper tape that we used to program this computer in machine language. Everything considered, this computer, which dates back to the late 1960s, has a keyboard for input and a roll of paper as output. So in a very rudimentary fashion, little has changed. Oh – we didn’t have a mouse!

    The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer, regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore and onward to our modern machines changed the hardware that we interfaced to, but changed little in the way we input, view and output the results of our computing effort. The mouse made some changes, and the advent of windowed interfaces such as Windows and Apple’s made things somewhat easier for the user. My 4-year-old granddaughter plays her Dora games on our computer. She knows how to start programs, use the mouse and play the game, and is quite adept, so we have come some distance in making computers usable.

    One of my chief bitches is the constant harangues leveled at Microsoft. Yup – they are a money-making organization. You like Apple? No problem for me. I don’t use Apple, mostly because I’m comfortable in the Windows environment, but probably more because I don’t like Apple’s “holier than thou” attitude. Some think they do superior things, and that’s also fine with me. Obviously the iPhone has not done badly, and other Apple products have fared well. But they are expensive. I just built a new machine with 4 terabytes of storage, an Intel Core i7 950 processor and 12 GB of DDR3 RAM. It cost me – with dual monitors – less than 2000 dollars.

    Now to the chief reason for this blog. I’m going to continue developing software for as long as I’m able. For that reason I don’t see my keyboard, mouse and displays changing much for many years. I also don’t think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here: http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn’t it be great if all they needed was hand gestures? Although not mentioned, it would also be nice if computers responded intelligently to a user’s voice.

    There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to come to a cost-effective reality, but it is certainly coming. I just don’t believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • SharePoint 2007 Hosting :: How to Move a Document from One Library to Another

    - by mbridge
    Moving a document using a SharePoint Designer workflow involves copying the document to the SharePoint document library you want to move the document to, and then deleting the document from the document library it is currently in. You can use the Copy List Item action to copy the document and the Delete Item action to delete it.

    To create a SharePoint Designer workflow that can move a document from one document library to another:

    1. In SharePoint Designer 2007, open the SharePoint site on which the document library that contains the documents to move is located.
    2. On the Define your new workflow screen of the Workflow Designer, enter a name for the workflow, select the document library you want to attach the workflow to (this would be a document library containing documents to move), select Allow this workflow to be manually started from an item, and click Next.
    3. On the Step 1 screen of the Workflow Designer, click Actions, and then click More Actions from the drop-down menu.
    4. On the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Copy List Item from the actions list, and click Add. The following text is added to the Workflow Designer: Copy item in this list to this list
    5. On the Step 1 screen of the Workflow Designer, click the first this list (representing the document library to copy the document from) in the text of the Copy List Item action.
    6. On the Choose List Item dialog box, leave Current Item selected, and click OK.
    7. On the Step 1 screen of the Workflow Designer, click the second this list (representing the document library to copy the document to) in the text of the Copy List Item action, and select the document library (this is the document library to which you want to move the document) from the drop-down list box that appears.
    8. On the Step 1 screen of the Workflow Designer, click Actions, and then click More Actions from the drop-down menu.
    9. On the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Delete Item from the actions list, and click Add. The following text is added to the Workflow Designer: then Delete item in this list
    10. On the Step 1 screen of the Workflow Designer, click this list in the text of the Delete Item action.
    11. On the Choose List Item dialog box, leave Current Item selected and click OK. The final text for the workflow should now look like: Copy item in DocLib1 to DocLib2 then Delete item in DocLib1 - where DocLib1 is the SharePoint document library containing the document to move, and DocLib2 is the document library to move the document to.
    12. On the Step 1 screen of the Workflow Designer, click Finish.

    How to Test the Workflow?

    1. Go to the SharePoint document library to which you attached the workflow, click on a document, and select Workflows from the drop-down menu.
    2. On the Workflows page, click the name of your SharePoint Designer workflow.
    3. On the workflow initiation page, click Start.

    Read the article

  • Explanation of the init.d scripts in Fedora

    - by Shahmir Javaid
    Below is a copy of the vsftpd init script. I need some explanation of a few of the lines in it:

    #!/bin/bash
    #
    ### BEGIN INIT INFO
    # Provides: vsftpd
    # Required-Start: $local_fs $network $named $remote_fs $syslog
    # Required-Stop: $local_fs $network $named $remote_fs $syslog
    # Short-Description: Very Secure Ftp Daemon
    # Description: vsftpd is a Very Secure FTP daemon. It was written completely from
    #              scratch
    ### END INIT INFO
    #
    # vsftpd      This shell script takes care of starting and stopping
    #             standalone vsftpd.
    #
    # chkconfig: - 60 50
    # description: Vsftpd is a ftp daemon, which is the program \
    #              that answers incoming ftp service requests.
    # processname: vsftpd
    # config: /etc/vsftpd/vsftpd.conf

    # Source function library.
    . /etc/rc.d/init.d/functions

    # Source networking configuration.
    . /etc/sysconfig/network

    RETVAL=0
    prog="vsftpd"

    start() {
        # Start daemons.
        # Check that networking is up.
        [ ${NETWORKING} = "no" ] && exit 1
        [ -x /usr/sbin/vsftpd ] || exit 1
        if [ -d /etc/vsftpd ] ; then
            CONFS=`ls /etc/vsftpd/*.conf 2>/dev/null`
            [ -z "$CONFS" ] && exit 6
            for i in $CONFS; do
                site=`basename $i .conf`
                echo -n $"Starting $prog for $site: "
                daemon /usr/sbin/vsftpd $i
                RETVAL=$?
                echo
                if [ $RETVAL -eq 0 ]; then
                    touch /var/lock/subsys/$prog
                    break
                else
                    if [ -f /var/lock/subsys/$prog ]; then
                        RETVAL=0
                        break
                    fi
                fi
            done
        else
            RETVAL=1
        fi
        return $RETVAL
    }

    stop() {
        # Stop daemons.
        echo -n $"Shutting down $prog: "
        killproc $prog
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
        return $RETVAL
    }

    # See how we were called.
    case "$1" in
      start)
            start
            ;;
      stop)
            stop
            ;;
      restart|reload)
            stop
            start
            RETVAL=$?
            ;;
      condrestart|try-restart|force-reload)
            if [ -f /var/lock/subsys/$prog ]; then
                stop
                start
                RETVAL=$?
            fi
            ;;
      status)
            status $prog
            RETVAL=$?
            ;;
      *)
            echo $"Usage: $0 {start|stop|restart|try-restart|force-reload|status}"
            exit 1
    esac
    exit $RETVAL

    Question I: What the hell is the difference between the && and || signs in the commands below? Is it just an easy way to do a simple if check, or is it completely different from an if [ ...something... ]; then ...something... fi?

    # Check that networking is up.
    [ ${NETWORKING} = "no" ] && exit 1
    [ -x /usr/sbin/vsftpd ] || exit 1

    Question II: I get what -eq and -gt are (equal to, greater than), but is there a simple website that explains what -x, -d and -f are? Any help would be appreciated. Running Fedora 12 as my OS. Script copied from /etc/init.d/vsftpd.

    Question III: It says the required starts are $local_fs $network $named $remote_fs $syslog, but I can't see anywhere that it checks for those.
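    A short illustration for Questions I and II - a minimal sketch, not from the original script, assuming standard bash and test(1) semantics. && and || short-circuit on the previous command's exit status, so those one-liners are exactly compact if-statements, and -x/-d/-f are file-test operators:

    #!/bin/bash
    # "a && b": run b only if a succeeded (exit status 0).
    # "a || b": run b only if a failed (non-zero exit status).
    [ "${NETWORKING}" = "no" ] && exit 1   # bail out if networking is turned off
    [ -x /usr/sbin/vsftpd ] || exit 1      # bail out if the daemon is NOT executable

    # The equivalent long forms:
    if [ "${NETWORKING}" = "no" ]; then exit 1; fi
    if [ ! -x /usr/sbin/vsftpd ]; then exit 1; fi

    # Common file-test operators understood by [ ] / test:
    [ -x /usr/sbin/vsftpd ]          # true if the file exists and is executable
    [ -d /etc/vsftpd ]               # true if it exists and is a directory
    [ -f /etc/vsftpd/vsftpd.conf ]   # true if it exists and is a regular file
    [ -e /some/path ]                # true if it exists at all (any type)

    As for Question III: the script never checks those facilities itself; the ### BEGIN INIT INFO block is metadata that boot-time dependency tools (chkconfig/insserv) read to order services, not code that gets executed.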

    Read the article

  • Backing up MySQL DB with mixture of InnoDB and MyISAM tables

    - by madphp
    I have a large database (almost 1 GB), and it has a mixture of InnoDB and MyISAM tables. Does anyone have any general tips when backing it up, or more specifically, the commands I should send to mysqldump? I see that I should lock MyISAM tables and use a single transaction for InnoDB, but what if I have both? Also, what actually happens when I lock an entire (very big) table on a production database?
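    A hedged sketch of the usual split (the database name mydb is a placeholder; the flags are standard mysqldump options, but test on a replica first): InnoDB tables can be dumped consistently inside one transaction, while MyISAM tables need real read locks.

    # Mostly-InnoDB database: one REPEATABLE READ transaction gives a
    # consistent snapshot of the InnoDB tables without blocking writers.
    # (MyISAM tables in the same dump are NOT protected by this.)
    mysqldump --single-transaction --routines --triggers mydb > mydb_innodb_safe.sql

    # Mixed or MyISAM-heavy database: fall back to table locking, which
    # blocks writes to each table while it is being dumped.
    mysqldump --lock-tables --routines --triggers mydb > mydb_locked.sql

    On the last question: a read lock on a big production MyISAM table queues every INSERT/UPDATE/DELETE against it until the lock is released, so for a database this size expect writes to stall for however long the dump of that table takes.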

    Read the article

  • Change Logon DPI setting in Windows 8.1

    - by jmc302005
    I love how M$ keeps making decisions for me about how I want my desktop to look. Now they have added per-user DPI settings. The problem this has created is that there is no adjustable DPI setting for the lock/logon screen. Let me explain: you can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen.

    For example: I use a 44" flat-screen TV as the monitor on my desktop - big enough for me to sit in my recliner and use my comp. But I don't have a wireless keyboard, and it sucks having the wire from the keyboard running across the floor; plus I really don't want to keep a keyboard next to me. So I use the on-screen keyboard for logging in and quick typing (search, web addresses, etc.). The problem is that with the new DPI setup, my on-screen keyboard takes up nearly half the screen. Does M$ think we are all blind? Oh no, I remember - they think desktops should look like tablets and phones.

    I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named "LogicalDPIOverride" with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1, with no change in the result. Instead I noticed that after logging out and back in, the -1 value was back in the registry. So now M$ has also added a way for us to not be able to change a setting in the registry. They are making it harder and harder for us power users to do anything with the settings in Windows. Soon we will all have the same exact Windows with absolutely no customization.

    OK, sorry for the quick rant. The real question here is: how can I change this default DPI behavior? Can I use the LogPixels string that worked for DPI in Windows 7? Here are two screenshots, one of the lock screen and one of the logon screen:

    http://i.imgur.com/6RM5ufE.jpg
    http://i.imgur.com/cnY5bmm.jpg

    Please, any help will be appreciated.

    Read the article
