Search Results

Search found 139102 results on 5565 pages for 'one time password'.

  • Creating a DOS bootable CD

    - by dotnetdev
    I need to burn a CD which I will boot from to reset a password on Windows Server. I am using Active Password Changer, but I get an error like this one: http://www.freeimagehosting.net/image.php?faa4737491.png How can I create a "DOS bootable disk"? I have an ISO on the CD that I thought would work. The manual says I need DOS system files; where do I get those from? Thanks

    Read the article

  • Disable password policy using the command prompt in Server 2008

    - by user50273
    Is there a way to disable the password policy in Windows Server 2008 using the command prompt? I know how to do it using Local Security Policy in Administrative Tools; I was wondering if there is a way to change it from the command prompt. I guess there must be some registry settings that need to be changed, but I do not know which registry entry will disable the password policy. If you can tell me which registry entry to change, I can write the command myself. Thanks

    Read the article

  • htaccess allow if does not contain string

    - by Tom
    I'm trying to set up a .htaccess file which will allow users to bypass the password block if they come from a domain which does not start with preview, e.g. http://preview.example.com would trigger the password and http://example.com would not. Here's what I've got so far:
        SetEnvIfNoCase Host preview(.*\.)? preview_site
        AuthUserFile /Users/me/.htpasswd
        AuthGroupFile /dev/null
        AuthType Basic
        AuthName "Development Area"
        Require valid-user
        Order deny,allow
        Allow from 127
        deny from env=preview_site
        Satisfy any
    Any ideas?

    Read the article

  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we published a press release about another successful Oracle GoldenGate project, this time at Ameristar. Ameristar is a casino gaming company and needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project integrates Ameristar’s promotional and gaming data from 14 data sources across its 7 casino hotel properties, in real time, into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may notice, there was no Oracle Database involved in this project, but Ameristar’s IT leadership knew that GoldenGate’s strong heterogeneous, real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse and let marketing teams use this real-time customer information to improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. Another key benefit the company experienced with GoldenGate is lower operational costs. The previous data capture solution Ameristar used was trigger-based and required a lot of effort to manage; they needed dedicated IT staff to maintain it. With GoldenGate, the solution runs seamlessly without needing fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day, if it was possible in Sharepoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list and we needed to create the same top level folders in another library. I knew that it was possible to open a Sharepoint document library in explorer using a UNC style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)
        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}
    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders and then do a foreach (using the % alias) and call "mkdir".

    Read the article

  • Startup/Shutdown time in Xubuntu is increasing!

    - by Ankit
    I am a novice Xubuntu user on a dual-boot machine. The other OS I have is Windows 7. When I first began using Xubuntu, I had really fast startup and shutdown (much, much faster than Windows 7 :) ). However, as I started using it more and more for my work, these times started rising. I do not have any problems with the execution speed of running applications. My main concern is the shutdown time, which has now gone above the Windows shutdown time [startup time has only partially increased compared to shutdown]. I checked some similar questions like this one; however, they do not seem to answer my concern, as the users there experience a long wait before the screen goes blue. In my case, the screen goes blue (the desktop session ends and a blue screen with a moving slider appears) pretty fast. However, it remains blue for a long time. Another answer that I saw on Google was to use dmesg and then stop some services that I do not want. However, being a novice, I could not completely understand what it meant.

    Read the article

  • Login takes a long time

    - by Arkaprovo Bhattacharjee
    I have been using Ubuntu 12.04 for the past 12 days. In the beginning login was fast enough; after I put in the password it hardly took 3 to 4 seconds to reach the desktop, but now it is taking more than 40 seconds to show the desktop after entering the password. What is the problem, and is there any solution? P.S. there are only two programs (psensor and jupiter) that start automatically after login.
    boot.log:
        fsck from util-linux 2.20.1
        /dev/sda6: clean, 254544/3325952 files, 2133831/13285632 blocks
         * Stopping Userspace bootsplash [ OK ]
         * Stopping Flush boot log to disk [ OK ]
         * Starting mDNS/DNS-SD daemon [ OK ]
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
         * Starting bluetooth daemon [ OK ]
         * Starting network connection manager [ OK ]
         * Starting AppArmor profiles [ OK ]
         * Stopping System V initialisation compatibility [ OK ]
         * Starting CUPS printing spooler/server [ OK ]
         * Starting System V runlevel compatibility [ OK ]
         * Starting Bumblebee supporting nVidia Optimus cards [ OK ]
         * Starting LightDM Display Manager [ OK ]
         * Starting save kernel messages [ OK ]
         * Starting anac(h)ronistic cron [ OK ]
         * Starting ACPI daemon [ OK ]
         * Starting regular background program processing daemon [ OK ]
         * Starting deferred execution scheduler [ OK ]
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
         * Starting CPU interrupts balancing daemon [ OK ]

    Read the article

  • Merge two different API calls into One

    - by dhilipsiva
    I have two different apps in my Django project. One is "comment" and the other one is "files". A comment might have some files attached to it. The current way of creating a comment with attachments is by making two API calls. The first one creates the actual comment and replies with the comment ID, which serves as a foreign key for the files. Then for each file, a new request is made with the comment ID. Please note that "files" is a generic app that can be used with other apps too. What is the cleanest way of turning this into one API call? I want this as a single API call because I am in a situation where I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack now. So this was the only way I could think of.
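    One common way to handle this (a rough sketch of my own, not taken from the question; the app, model and field names are hypothetical) is a single Django view that accepts one multipart POST containing the comment body plus all files, and creates everything inside one transaction:
        # Hypothetical single-endpoint sketch: one POST creates the comment
        # and all of its file attachments atomically.
        from django.db import transaction
        from django.http import JsonResponse
        from django.views.decorators.http import require_POST

        from comment.models import Comment        # assumed model names
        from files.models import Attachment

        @require_POST
        def create_comment_with_files(request):
            with transaction.atomic():
                comment = Comment.objects.create(
                    author=request.user,
                    body=request.POST["body"],
                )
                for uploaded in request.FILES.getlist("attachments"):
                    Attachment.objects.create(comment=comment, file=uploaded)
            # The notification email can be sent right here, after the block,
            # because the comment and every attachment are already in hand.
            return JsonResponse({"id": comment.id}, status=201)
    The client then sends the comment text and the files together as one multipart/form-data request, so no second round trip with the comment ID is needed.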

    Read the article

  • Upgrade to 12.04 results in an empty Dash, no date & time on the top panel either

    - by Nicolas
    I've upgraded from an Ubuntu Netbook Remix release to 12.04 LTS, and I've got two issues. (I have an Asus eeePC, 32 bits, Intel 945GME x86/MMX/SSE2 and Intel Atom CPU N270 @ 1.6GHz x2.) First, nothing in the Dash: only the "home" tab, the other tabs are missing, and no search results whatsoever. Second, missing elements in the system panel, namely Privacy and Date & Time, and no date & time in the top-right corner either. I've tried to reset Unity from the terminal, but the process was a whole mess full of errors. It did show date & time in the system panel (not in the top-right corner) while the process was running in the terminal, but then it was such a mess (no more icons in the right corner, amongst other things), and the process wouldn't complete, so I had to reboot the computer and got Unity back as before, still with no date & time and no Privacy.

    Read the article

  • Dual monitors with one above the other?

    - by Felix
    I'm using Gnome 3 and proprietary Nvidia drivers. I have tried to set in nvidia-settings my external monitor to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc), but its size or position doesn't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I'm suspecting this is a Gnome issue, not an Nvidia one. Anyone know how to fix this? Update: some versions: Gnome: 3.2.2.1 Nvidia: 280.13 Update 2: I can see that Gnome 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with Gnome 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.

    Read the article

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing sucks at home (slow browsing and slow loading times). I changed DNS servers to 8.8.0.0, still doesn't help. And funnily, download speed is extremely high on this network (meaning torrents for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in router settings or what can I try? By the way, I use wired connection to router. EDIT: There's no problems when using Windows. EDIT: ifconfig: eth0 Link encap:Ethernet HWaddr f2:4d:a0:c0:3f:4c inet addr:192.168.11.8 Bcast:192.168.11.255 Mask:255.255.255.0 inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:206798 errors:0 dropped:0 overruns:0 frame:0 TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:76680734 (76.6 MB) TX bytes:21738160 (21.7 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:160 errors:0 dropped:0 overruns:0 frame:0 TX packets:160 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11094 (11.0 KB) TX bytes:11094 (11.0 KB)` ping -c 2 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. --- 4.2.2.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms ping -c 2 google.com PING google.com (213.159.32.147) 56(84) bytes of data. 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms --- google.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms

    Read the article

  • View the Time & Date in Chrome When Hiding Your Taskbar

    - by Asian Angel
    Do you prefer keeping your Taskbar hidden but still need to keep watch on what time it is? Now you can keep track of the time without the Taskbar using the Date Today extension for Google Chrome. A Look at Date Today with Different Themes This extension does one thing and does it well…it provides you with an “active icon” clock that will let you view the time and date in two fashions. The first is by hovering your mouse over the “Toolbar Clock Button”… And the second is by clicking on the “Toolbar Clock Button” to view an enlarged version. Here you can see the extension in use with five different themes to get an idea of how it might look with the theme that you are currently using. It does stand out very nicely with brighter or darker colored themes. Conclusion While this extension is obviously not for everyone it will make a nice (and useful) addition to Chrome for those who prefer keeping their Taskbar hidden. Links Download the Date Today extension (Google Chrome Extensions)

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK about how one can attain beautiful results using just one (rather tiny) texture that's being sent to the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to have a half-space normal map by using only the RG channels of an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)? Here's the texture being used. Click to see it in full size.
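    To make the channel-packing idea concrete, here is a rough illustration of my own (not taken from the demo reel): each RGBA channel of the packed texture is read as a grayscale weight that blends between a handful of flat material colours.
        # Rough sketch, not the reel's actual shader: treat each channel of a
        # packed RGBA mask texture as a blend weight for one material colour.
        import numpy as np

        mask = np.random.rand(64, 64, 4)        # stand-in for the packed texture
        materials = np.array([                   # one base colour per channel
            [0.55, 0.35, 0.20],   # dirt
            [0.30, 0.50, 0.25],   # grass
            [0.50, 0.50, 0.55],   # rock
            [0.90, 0.90, 0.95],   # snow
        ])

        weights = mask / mask.sum(axis=-1, keepdims=True)      # normalise the masks
        albedo = np.einsum("hwc,cd->hwd", weights, materials)  # per-pixel blended colour
    A real shader would do the same per-pixel weighting on the GPU, which is why one small mask texture plus a few uniform colours can go a long way; the normal map is a separate trick, such as storing only two components and reconstructing the third in the shader, as the question itself suggests.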

    Read the article

  • How to calculate the intersection time and place of multiple moving arcs

    - by user20733
    I have rocks orbiting moons, moons orbiting planets, planets orbiting suns, and suns orbiting black holes, and the current system could have many, many layers of orbits. The position of any object is a function of time and is relative to the object it orbits (so far so good). Now, given two objects (A, B), a start time and a speed, I want to work out when and where to go. I can work out where A and B are at a given time, so I just need:
    1: the direction to travel in from A to B (remember B is moving, and not in a straight line);
    2: the time to get to B in a straight line.
    Travel must be in a straight line with the shortest possible distance. As an extension to this question, how will I know if it is better to wait? E.g. is it faster to stay on object A and wait for an hour, when the objects may be closer, than to set off from A to B at the start? Cheers, it hurt my brain.
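    One rough way to attack this (my own sketch, not from the question): since B's position is a known function of time, step forward and find the earliest delay t at which the straight-line travel time from A's starting position to B(t0 + t) is no greater than t; the direction is then simply towards that intercept point.
        # Hedged sketch: earliest straight-line intercept of a moving target B,
        # leaving from A's position at time t0 with a constant speed.
        import math

        def intercept(pos_a, pos_b_at, t0, speed, horizon=10_000.0, step=1.0):
            """pos_a: (x, y) of A at t0; pos_b_at(t): (x, y) of B at absolute time t."""
            t = 0.0
            while t <= horizon:
                bx, by = pos_b_at(t0 + t)
                travel = math.hypot(bx - pos_a[0], by - pos_a[1]) / speed
                if travel <= t:              # we can reach B's position by time t0 + t
                    return t0 + t, (bx, by)  # arrival time and intercept point
                t += step
            return None                      # no intercept found within the horizon
    Whether waiting is worthwhile can be answered with the same routine: evaluate it for several departure times (recomputing A's position for each) and compare the resulting arrival times. A bisection or Newton step on travel(t) - t would refine the answer beyond the step size.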

    Read the article

  • Symfony 1.4 - Don't save a blank password on an executeUpdate action.

    - by Twelve47
    I have a form to edit a UserProfile which is stored in a MySQL db, and which includes the following custom configuration:
        public function configure()
        {
            $this->widgetSchema['password'] = new sfWidgetFormInputPassword();
            $this->validatorSchema['password']->setOption('required', false); // you don't need to specify a new password if you are editing a user.
        }
    When the user tries to save, the executeUpdate method is called to commit the changes. If the password is left blank, the password field is set to '', but I want it to retain the old password instead of overwriting it. What is the best (/most in the Symfony ethos) way of doing this? My solution was to override the setter method on the model (which I had done anyway for password encryption) and ignore blank values:
        public function setPassword( $password )
        {
            if ($password == '') return false; // if the password is blank, don't save it.
            return $this->_set('password', UserProfile::encryptPassword( $password ));
        }
    It seems to work fine like this, but is there a better way? If you're wondering, I cannot use sfDoctrineGuard for this project as I am dealing with a legacy database and cannot change the schema.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care about how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:
    - Buy a server, place it in a rack at an ISP and run the sites on that server
    - Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as well as the files stored, and the databases are hosted in the same server as the other shared databases
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM, basically the same as the first option, except you don't have a physical server
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.
    With all of those options, scalability is a problem, even with the cloud-based ones, though not for the same reasons:
    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites.
    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did on a single machine: state is something of the past, you have to store every byte of state in either a DB or in a viewstate or in a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in-memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM. This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.
    Offer a scalable, flexible VM which extends with my needs
    Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned), I don't care, as that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about, is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?
    The flexible VM: .NET's CLR?
    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question of why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or their own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back to where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB.
    But what about memory replication and other problems?
    This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code needed. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • PASS Summit – looking back on my first time

    - by Fatherjack
      So I was lucky enough to get my first experience of PASS Summit this year and took some time beforehand to read some blogs and reference material to get an idea on what to do and how to get the best out of my visit. Having been to other conferences – technical and non-technical – I had a reasonable idea on the routine and what to expect in general. Here is a list of a few things that I have learned/remembered as the week has gone by. Wear comfortable shoes. This actually needs to be broadened to Take several pairs of comfortable shoes. You will be spending many many hours, for several days one after another. Having comfortable feet that can literally support you for the duration will make the week in general a whole lot better. Not only at the conference but getting to and from you could well be walking. In the evenings you will be walking around town and standing talking in various bars and clubs. Looking back, on some days I was on my feet for over 20 hours. Make friends. This is a given for the long term benefits it brings but there is also an immediate reward in being at a conference with a friend or two. Some events are bigger and more popular than others and some have the type of session that every single attendee will want to be in. This is great for those that get in but if you are in the bathroom or queuing for coffee and you miss out it sucks. Having a friend that can get in to a room and reserve you a seat is a great advantage to make sure you get the content that you want to see and still have the coffee that you need. Don’t go to every session you want to see This might sound counter intuitive and it relies on the sessions being recorded in some way to guarantee you don’t totally miss out. Both PASS Summit and SQL Bits sessions are recorded (summit is audio, SQLBits is video) and this means that if you get into a good conversation with someone over a coffee you don’t have to break it up to go to a session. Obviously there is a trade-off here and you need to decide on the tipping point for yourself but a conversation at a place like this could make a big difference to the next contract or employer you have or it might simply be great catching up with some friends you don’t see so often. Go to at least one session you don’t want to Again, this will seem to be contrary to normal logic but there is no reason why you shouldn’t learn about a part of SQL Server that isn’t part of your daily routine. Not only will you learn something new but you will also pick up on the feelings and attitudes of the people in the session. So, if you are a DBA, head off to a BI session and so on. You’ll hear BI speakers speaking to a BI audience and get to understand their point of view and reasoning for making the decisions they do. You will also appreciate the way that your decisions and instructions affect the way they have to work. This will help you a lot when you are on a project, working with multiple teams and make you all more productive. Socialise While you are at the conference venue, speak to people. Ask questions, be interested in whoever you are speaking to. You get chances to talk to new friends at breakfast, dinner and every break between sessions. The only people that might not talk to you would be speakers that are about to go and give a session, in most cases speakers like peace and quiet before going on stage. Other than that the people around you are just waiting for someone to talk to them so make the first move. 
There is a whole lot going on outside of the conference hours and you should make an effort to join in with some of this too. At karaoke evenings or just out for a quiet drink with a few of the people you meet at the conference. Either way, don’t be a recluse and hide in your room or be alone out in the town. Don’t talk to people Once again this sounds wrong but stay with me. I have spoken to a number of speakers since Summit 2013 finished and they have all mentioned the time it has taken them to move about the conference venue due to people stopping them for a chat or to ask a question. 45 minutes to walk from a session room to the speaker room in one case. Wow. While none of the speakers were upset about this sort of delay I think delegates should take the situation into account and possibly defer their question to an email or to a time when the person they want is clearly less in demand. Give them a chance to enjoy the conference in the same way that you are, they may actually want to go to a session or just have a rest after giving their session – talking for 75 minutes is hard work, taking an extra 45 minutes right after is unbelievable. I certainly hope that they get good feedback on their sessions and perhaps if you spoke to a speaker outside a session you can give them a mention in the ‘any other comments’ part of the feedback, just to convey your gratitude for them giving up their time and expertise for free. Say thank you I just mentioned giving the speakers a clear, visible ‘thank you’ in the feedback but there are plenty of people that help make any conference the success it is that would really appreciate hearing that their efforts are valued. People on the registration desk, volunteers giving schedule guidance and directions, people on the community zone are all volunteers giving their time to help you have the best experience possible. Send an email to PASS and convey your thoughts about the work that was done. Maybe you want to be a volunteer next time so you could enquire how you get into that position at the same time. This isn’t an exclusive list and you may agree or disagree with the points I have made, please add anything you think is good advice in the comments. I’d like to finish by saying a huge thank you to all the people involved in planning, facilitating and executing the PASS Summit 2013, it was an excellent event and I know many others think it was a totally worthwhile event to attend.

    Read the article

  • from svn to git (+ LDAP + password-less updates + passworded access control)

    - by Jayen
    We have an SVN setup and there are some things we dislike about it and some things we like about it. We want to move to git, but we're not sure exactly what setup will work for us. We're currently using SVN (w/ Authz) + Apache (w/ WebDAV & LDAP).
    1. Hook to update the live site [like]
    2. Live site update requires no additional interaction [like]
    3. Live site update uses stored password [dislike]
    4. Commits require centralized-password authentication [like]
    5. Commit from live site changes stored credentials [dislike]
    6. Access control (per repository) for commits [like]
    Point 5 above is the one that keeps stuffing us up. Someone makes a commit from the live site and then the hook breaks. We're thinking to use gitosis/gitolite to get access control, but as they use ssh keys, we won't be requiring passwords. We're also thinking to use git-http-backend, and use Apache for authentication, but then do we lose access control? Can the live site be automatically updated from a hook if Apache requires authentication? Can we combine git-http-backend and gitosis/gitolite somehow? Can we store http credentials with git?

    Read the article

  • MySQL password changed every other day in Windows?

    - by PHP
    Every morning when I check the server status, I find that MySQL's password has changed: mysql -uuser -ppassword will report ERROR 1045 (28000): Access denied for user 'user'@'localhost' (using password: YES) And then I restart the server, and when it's up, MySQL is back to normal. It has now become a routine job. What can be the cause of this? How can I know what exactly is happening to MySQL? Here is the error log:
        100122 10:11:16 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100122 10:11:16 InnoDB: Starting shutdown...
        100122 10:11:18 InnoDB: Shutdown completed; log sequence number 0 22939338
        100122 10:11:18 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100122 10:12:40 InnoDB: Started; log sequence number 0 22939338
        100122 10:12:42 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
        100123 16:20:44 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100123 16:20:44 InnoDB: Starting shutdown...
        100123 16:20:46 InnoDB: Shutdown completed; log sequence number 0 22939832
        100123 16:20:46 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100123 16:22:09 InnoDB: Started; log sequence number 0 22939832
        100123 16:22:11 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
        100125 9:18:59 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100125 9:18:59 InnoDB: Starting shutdown...
        100125 9:19:00 InnoDB: Shutdown completed; log sequence number 0 22941001
        100125 9:19:00 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100125 9:20:22 InnoDB: Started; log sequence number 0 22941001
        100125 9:20:25 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections. Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)

    Read the article

  • The Network folder specified is currently mapped using a different user name and password

    - by Frank Thornton
    I have a NAS device with 3 shares. On one computer I have access to all 3 of the shares. On another computer I keep getting this error when I try to add a 2nd one: "The Network folder specified is currently mapped using a different user name and password [...]" That is the message I keep getting. What causes that? EDIT: Every share has its own username and password. EDIT: NET USE on the one running 3 shares from the same NAS device:
        New connections will be remembered.
        Status       Local  Remote                     Network
        -------------------------------------------------------------------------------
        OK           T:     \\192.168.2.5\SHARE1       Microsoft Windows Network
        OK           X:     \\Nas-1dsho-abc\SHARE2     Microsoft Windows Network
        Disconnected Y:     \\192.168.2.9\backups      Microsoft Windows Network
        OK           Z:     \\Nas-1dsho-abc\cbackups   Microsoft Windows Network
        The command completed successfully.
    NET USE on the other:
        New connections will be remembered.
        Status       Local  Remote                     Network
        -------------------------------------------------------------------------------
        OK           Y:     \\192.168.2.5\SHARE1       Microsoft Windows Network
        Unavailable  Z:     \\192.168.2.5\SHARE2       Microsoft Windows Network
        The command completed successfully.

    Read the article

  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set apache up to use Kerberos authentication to allow AD users to log in. It is working, but prompts the user twice for a username and password, with the first time being ignored (no matter what is put it in.) Only the second prompt includes the AuthName string from the config (i.e.: the first windows is a generic username/password one, the second includes the title "Kerberos Login") I'm not worried about integrated windows authentication working at this stage, I just want users to be able to login with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file: <Directory /var/www/kerberos> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms ONEVUE.COM.AU.LOCAL Krb5KeyTab /etc/krb5.keytab KrbServiceName HTTP/[email protected] require valid-user </Directory> krb5.conf: [libdefaults] default_realm = ONEVUE.COM.AU.LOCAL [realms] ONEVUE.COM.AU.LOCAL = { kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL default_domain = ONEVUE.COM.AU.LOCAL } [login] krb4_convert = true krb4_get_tickets = false The access log when accessing the secured directory (note the two seperate 401's) 192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" And one line in error.log [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)

    Read the article

  • Shut Out of XP - No Admin Password or CDR

    - by ashes999
    I inherited an old WinXP/Linux dual-boot machine from the stoneage. Because it has Linux, the regular boot process is replaced with the Fedora boot loader; I cannot, therefore, press F8 strategically to tell my PC to boot from CD. Even if I could, it's a moot point; the CDR doesn't seem to recognize any CDs. To make things worse, there's no option to network boot. The original user is probably long gone; I don't know the password for any of the Administrator group users. I can login using my corp account, but that's unprivileged on this machine. Since I'm not an admin, I can't do crazy things, like looking at boot.ini. Or deleting files. I only have 500MB free on my C drive. I'm pretty sure I can't boot from a USB, since I didn't see any settings for this in my BIOS. How can I get admin access for my user? Edit: Things I've tried: Boot from CD (CD not recognized) Launch CD from XP (CD not recognized) Install Daemon Tools Lite so I can install from an ISO -- don't have admin privileges XP password recovery tool -- requires admin privileges Adding an admin user -- no access to Control Panel Users since I'm not an admin Logging in as both the admin users on the system (trying some standard passwords) Using Fedora to chntpw (the Fedora version installed is ancient -- 2.7)

    Read the article

  • Hadoop init script asks for password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.
        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=
        test -x $HADOOP_BIN || exit 0
        RETVAL=0
        set -e
        cd /
        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }
        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }
        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }
        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac
        exit $RETVAL
    Please tell me how to run Hadoop without entering a password.

    Read the article

  • Windows 7 Professional Cannot Connect to Share - Wrong password

    - by henryford
    I know that this question has actually been asked a few times before, but none of the solutions I found yielded any results on my end, and I can't get my head around it: when I am trying to connect to a share on the network, I always get the response "The specified network password is incorrect". However, the password is definitely correct and it works if I connect from another machine. I changed the LAN Manager authentication level to "Send LM & NTLM - use NTLMv2 session security if negotiated", I configured Kerberos encryption types to include all suites, rebooted (several times), but still no luck. I can connect if I use my regular account with which I am logged in, but I need to connect with a different user since my login user does not have enough privileges on the share. When I do that, the error above comes up. I'm really frustrated at the moment; this problem is driving me crazy. I'd be grateful for any possible solution to this. At the moment I'm using a workaround: I connect to a different machine via RDP, log in with the user I have to use for the network-share connection, and then I can map the drive and copy/paste from the RDP session to my local workstation. This also works when I connect via RDP with my current login user and map the drive with the other user who has sufficient privileges. Thanks in advance, Thomas

    Read the article

  • Permission denied (publickey,gssapi-with-mic,password) ssh error

    - by zentenk
    Heads up: I'm a noob with Linux and networking. I set up an Ubuntu server and I have a static IP for my network. When I try to connect to the server from home (external), it prompts me to log in. Whether I supply the correct password or an incorrect one, I get the error "Permission denied, please try again." and after 3 attempts I get "Permission denied (publickey,gssapi-with-mic,password)". I am, however, able to connect with SSH from another computer in the same network with ssh < internal ip of server >. I'm connecting from Mac OS X and my config file is vanilla. Note: During installation of Ubuntu it said I didn't have a default route or something while doing auto network configuration, but I ignored it and continued the installation; could this be the problem? EDIT: I have tried the below: I have nothing in hosts.allow, and iptables shows the ports that I have allowed, which is 22. I checked the auth.log, and there is nothing when I connect to it remotely (even when it says permission denied). I have tried connecting to it internally and the correct authentication logs show. Any idea what's wrong?

    Read the article
