Search Results

Search found 3172 results on 127 pages for 'middle earth'.


  • FastGate A20 Line And Himem.sys Issue With Updating BIOS

    - by Boris_yo
    I have been persistent with the idea of performing my first BIOS update ever through MS-DOS, but had been postponing the task until today. Despite people telling me any bootable ISO will do it, either through CD-ROM or RAMDRIVE, I am still having problems. First there is the problem with the CD-ROM driver: trying to make it work with 4 driver files (cd1.SYS, cd2.SYS, cd3.SYS, cd4.SYS), as well as starting the RAMDISK, proved to be a failure:
        CD-ROM XMS Allocation Error
        RAMDISK XMS Allocation Error (X: and R: drives not working)
    The A20 line seemed to be the obstacle, which after a couple of searches pointed me to this article on the Microsoft website. It seems that FastGate is the culprit: it takes over the A20 line and conflicts with himem.sys, which should be handling it, causing the driver to be unable to allocate memory resources. Although the article suggests 2 workarounds (disabling the FastGate option, or adding a switch to himem.sys), I read that the former workaround could cause problems which involve later tinkering with the BIOS, disabling shadow copy etc., while the latter workaround can just hang the system, as stated in the link above. I assume it just hangs the boot process from the image file, though. Summing up the above, I am cautious and think it is risky to follow either workaround, because disabling FastGate, or trying the available himem.sys switches from 1-14 or 16, could crash the BIOS update process by itself. I could do this without the need for himem.sys with a bootable USB thumbdrive by making it be seen as USB-HDD, but some time ago I read that it is never a good idea to update a BIOS from a hard drive, so even though it is a simulation, who knows... Maybe it will deactivate the hard drive in the middle of the BIOS update process, or even the USB thumbdrive per se? One forum discussion was about updating a BIOS and somebody suggested not loading himem.sys for some reason, but now that I think of it, what if the BIOS update needs upper memory?

    Read the article

  • Server stops responding, can't find issue?

    - by Corey W
    I've had a pretty basic server up and running CentOS with a webserver/database, and have noticed that it has locked up a few times in the middle of the night. It seems to happen randomly. When it locks up I can ssh in (although it seems to hang once connected), but I can't access cPanel/WHM and have to reboot the server to get everything back up. Checking the messages log I see the entries below like clockwork every 5 minutes 1 second, and then it just stops logging anything until I reboot. I can't seem to find any log showing any issue. Is there somewhere I can check to try to figure out what is happening? Could this be caused by the CPU being maxed?
        Nov 17 08:01:35 s1 pure-ftpd: (__cpanel__service__auth__ftpd__Q13SKrtaCJCHjBezTfU8Iqmsi@127.0.0.1) [INFO] Logout.
        Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
        Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__mxidFBSnQXmR0QzqSxlqrXLIH0CmJ0GPh9bZ5V3 is now logged in
        Nov 17 08:06:37 s1 pure-ftpd: (__cpanel__service__auth__ftpd__mxidBDaCgnqSxlqrXLIH0CmJ0GPh9bZ5V3@127.0.0.1) [INFO] Logout.
        Nov 17 08:11:37 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
        Nov 17 08:11:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__T4B7F71acf1dsdJSeJHdqKNcbOdpzNnN_GttgcM is now logged in
        Nov 17 08:11:38 s1 pure-ftpd: (__cpanel__service__auth__ftpd__T4B7F71acf1KNcbOdpzNnN_GttgcM@127.0.0.1) [INFO] Logout.
        Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
        Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__W5C1RzumtaNwe4cU8Lt1 is now logged in
        Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] Logout.
        Nov 17 09:10:58 s1 kernel: imklog 4.6.2, log source = /proc/kmsg started.
        Nov 17 09:10:58 s1 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1094" x-info="http://www.rsyslog.com"] (re)start
        Nov 17 09:10:58 s1 kernel: Initializing cgroup subsys cpuset
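    One way to gather more context (a sketch only, assuming the sysstat package is installed and collecting history, which CentOS does by default; the sa17 file name matches the Nov 17 example above):

        sar -q -f /var/log/sa/sa17      # load average / run queue around the time logging stops
        sar -r -f /var/log/sa/sa17      # memory and swap usage (swap exhaustion often looks like a hang)
        grep -i oom /var/log/messages*  # did the kernel OOM killer fire?
        last -x | head                  # reboot and shutdown history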

    Read the article

  • IPtables: DNAT not working

    - by GetFree
    In a CentOS server I have, I want to forward port 8080 to a third-party webserver. So I added this rule:
        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80
    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:
        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "
    (the --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded) According to this page the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this:
    - the rule in mangle/PREROUTING logs the incoming packets
    - the rule in nat/PREROUTING modifies the packets (it changes the dest IP and port)
    - the rule in nat/POSTROUTING logs the modified packets about to be forwarded
    Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets which are supposed to be modified by the second rule. It does log, however, packets that are produced in the server, hence I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule? PS: there are no more rules than those three. All other chains in all tables are empty and with policy ACCEPT.
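    For what it's worth, a DNAT rule on its own is usually not enough when the target is another machine: the kernel also has to be allowed to forward the packets, and the replies have to come back through this box. A minimal sketch of the extra pieces commonly needed (thirdparty_server_ip is a placeholder, and the FORWARD policy on a given box may differ):

        # allow the kernel to route forwarded packets
        sysctl -w net.ipv4.ip_forward=1

        # existing DNAT rule
        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

        # rewrite the source address so the third-party server replies via this box
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE

        # and let the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT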

    Read the article

  • SSH session becomes unresponsive when logged into Ubuntu Server virtual machine using VirtualBox

    - by nickbart
    Hi everyone, I'm really at my wits' end here, so I'm hoping someone here can help me. I have a virtual machine running Ubuntu Server 9.10. It's just a small development environment so I can keep my code separate from the test and production environments. I am running it through VirtualBox 3.1.6 on a laptop running Ubuntu Desktop 9.10. I have it set up with a bridged network connection and it is bridged to my laptop's wireless adapter. We have no wired connections in this office. I boot up the VM and everything is fine. I can SSH into it using gnome-terminal and for a while everything is kosher. Then, seemingly randomly, the SSH terminal session will hang. No error message, nothing; it just becomes unresponsive. If I go to the VirtualBox terminal I find the VM itself is perfectly fine. It can ping and I can SSH out with it. If I restart the networking on the VM, the SSH session in my gnome-terminal will most of the time become responsive again. Here's an interesting point: the SSH session will sometimes die right in the middle of me typing something (this points to it not being an idle session issue), and if I go to the VirtualBox terminal and restart the networking and then return to my gnome-terminal SSH session I find that it will come back to life and what I typed when the session hung originally will magically type itself into the buffer. So, my input is getting stored somewhere and just can't make its way to the VM until the networking on the VM is restarted. I've tried different versions of VirtualBox and used vmdk images and vdi images and nothing seems to work. I can't tell if the problem is with my laptop, VirtualBox, or the Ubuntu Server VDI. Is there any way to debug this issue? Or has anyone out there seen anything similar? Your help is much appreciated. Nick
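    A small debugging aid (a sketch, not a fix): SSH keepalives make a dead connection fail quickly instead of hanging silently, which helps tell a stalled network path from a stalled server. The user name and address below are placeholders.

        # ask the server for a keepalive every 15 s, give up after 4 missed replies
        ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=4 user@192.168.1.50

        # or persist it in ~/.ssh/config:
        # Host devvm
        #     HostName 192.168.1.50
        #     ServerAliveInterval 15
        #     ServerAliveCountMax 4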

    Read the article

  • BizTalk/MQ - DCOM was unable to communicate with the computer xxx using any of the configured protocols

    - by NealWalters
    I've read the other questions on this same error, but don't see a close match to this scenario. We got 18 of these last night between 12:17:13 and 12:39:37 on Win 2008/R2. It caused us to lose connectivity between BizTalk 2010 and the WebSphere MQ machine. All our BizTalk machines (Prod, QA, Train, etc...) got the messages at roughly the same time and in the same quantity (about 18 of them). Computer xxx is the WebSphere MQ machine. What could cause this in the middle of the night? The servers have been configured and running in Prod for a couple of years. There is no Windows Firewall running, and the servers are practically on the same rack. Could a run-away or 100% utilized CPU on the WebSphere MQ machine cause this issue? What else could cause it? BizTalk did NOT auto-recover from this situation. The above was followed by thousands of this message in the BizTalk event logs: The adapter "MQSeries" raised an error message. Details "Error encountered on Queue.Get Queue name = MyQManager/MyQueueName Reason code = 2354.". We restarted BizTalk host instances, and it did not come back right away. It seemed we had to stop host instances for about two minutes, then start them.

    Read the article

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain Unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (should be Japanese characters), Google Chrome (should be a heart), and Winamp (should be stars): Russian, German etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in the GUI. How can I fix it? Update: I have tried to use System Restore to fix it. I needed to go back in time quite a while because the most recent restore points didn't solve it so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again because those were removed during the restore. After that, the error occurred again! I then did a restore to a point before my new updates, but the error persists, and the old restore point (which I used before) is gone and there are currently no other snapshots of the system. Any suggestions on what to do now? Update 2: I could find a workaround: Control Panel → Region and Language → Administration → Change Language for Unicode-incompatible programs to Japanese (Japan). All mentioned programs display their symbols correctly again. However, I don't consider this a fix because these programs are not usually Unicode-incompatible, and it also leads to some (non-serious) artifacts in some programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

    Read the article

  • Laptop Randomly Turning On and Off

    - by Ian Mallett
    So, I have a pretty new laptop, and one of its quirks is that, at random times (though typically in the middle of the night), it seems to wake up from sleep mode, churn a bit, and then go back into sleep mode. I write "seems" because its fans are very loud, so it's obvious when it's not asleep, but during the time it is "on", I can't see anything on the screen. I have researched the problem somewhat, and could only find similar issues; nothing identical. In those cases, it appeared that certain devices could be responsible. Nothing is plugged into my computer during this behavior, but I nonetheless disabled every device's permission to wake the computer through the device manager. This included disabling the magic packet wake for the network (despite its only having a wireless connection). Using "powercfg /lastwake" gives an empty wake history. But, I also went through all the tasks and checked if they would wake the computer. None appeared to. The problem persisted, so, after some more research, I found this, and executed it for all power schemes on the computer. The problem persists. System: OS: Windows 7 Professional CPU: Intel 990X GPU: NVIDIA GeForce 580M/12GB RAM Motherboard: Clevo X7200 Model: NP7282-S1 (Sager-built laptop)
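    Two more powercfg queries that are sometimes useful here (run from an elevated command prompt; the device name in the last line is a placeholder):

        rem list devices still allowed to wake the machine
        powercfg -devicequery wake_armed

        rem list active wake timers (scheduled tasks, Windows Update and the like)
        powercfg -waketimers

        rem revoke wake permission for one specific device
        powercfg -devicedisablewake "Device Name"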

    Read the article

  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1). I'm looking for a way to allow a group of users to use dangerous HTML tags. There are some ways I can see, but I don't know how to implement them. One would be changing safe_html for the pages the editors own (1). Another would be to allow those tags on some subtree (2). And yet another would be finding an equivalent of "static text portlet" that would display in the middle panel (3). We could then use some of the composite products (I stumbled upon Collage and CMFContentPanels) to include the unsafe content on other sites. My site has been overrun by advert bots, so I don't want to remove the filtering altogether. I don't have an easy (no false positives) way of checking which users are bots, so deploying a captcha now wouldn't help either. The question is: how do I implement any of those solutions? (I already asked this on the Plone mailing list without an answer, so I thought I would give it another try here.)

    Read the article

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes. I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I will have to upload ten gigs every time I instantiate a backup after doing a simple commit to Subversion. Here are the options as I see them:
    Option 1: I am looking at duplicity, which has --volsize, which splits data over an amount of megs. Is it possible to split the Subversion dumps using this so further incremental backups are measured in megabytes?
    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
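    A third option worth considering (a sketch with placeholder paths, not a tested backup script): svnadmin can dump only the revisions added since the last backup, so each upload stays proportional to the new commits rather than the whole 10 GB.

        #!/bin/sh
        REPO=/var/svn/myrepo
        LAST=$(cat /var/backups/svn-lastrev 2>/dev/null || echo 0)
        HEAD=$(svnlook youngest "$REPO")

        if [ "$HEAD" -gt "$LAST" ]; then
            # dump only revisions LAST+1..HEAD as an incremental, delta-compressed stream
            svnadmin dump "$REPO" -r $((LAST + 1)):$HEAD --incremental --deltas \
                | gzip > /var/backups/svn-r$((LAST + 1))-$HEAD.dump.gz
            echo "$HEAD" > /var/backups/svn-lastrev
        fi

    The resulting per-revision files are small enough to push to S3 individually, and a full dump can still be taken periodically as a baseline.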

    Read the article

  • Where to put the SPF TXT record?

    - by YellowSquirrel
    I've set up Google Apps for my domain: I've registered the domain with Google by adding the CNAME Google asked for, and I've apparently successfully set up the MX Google mail servers. So far I don't have a dedicated server: I just have a domain at a registrar. Now I want to activate SPF and I'm confused. In the following short webpage: http://www.google.com/support/a/bin/answer.py?answer=178723 it is written that I must add a TXT record containing: v=spf1 include:_spf.google.com ~all Where should I enter this? Should this go in the zone (?) file, like I did for the CNAME and the MX records? So far I have something like this:
        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.
    Does adding the SPF TXT record mean I should literally have something like this:
        @ 10800 IN A 217.42.42.42
        @ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
        @ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
        @ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"
        @ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
        @ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
        google8a70835987f31e34 10800 IN CNAME google.com.
    I made that one up and included the TXT record right in the middle to show how confused I am. What I'd like to know is the exact syntax and where/how I should put this TXT record.
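    For what it's worth, the made-up example above is essentially the right shape: the SPF TXT record lives in the same zone file, at the apex (@), alongside the A and MX records, and its position among them doesn't matter. Once the registrar has published the zone, it can be checked from a command line (example.com is a placeholder for the real domain):

        dig +short TXT example.com
        # expected to include a line like:
        # "v=spf1 include:_spf.google.com ~all"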

    Read the article

  • Windows 7 turns itself off every day at about 9am

    - by Radek
    I was given this comp at work, and after a week or so this strange thing started to happen in the middle of doing something :-( It turns itself off at about 9.10am every morning. Just once a day, and it works fine after it has had its little nap. It happens both if the comp was on all night or if I turned it off before leaving the office. I tried swapping the memory, as I was told that there was an issue with memory, but moving or removing any memory did not make any difference. I am not aware of installing any program that could cause that. I installed AVG and set it up to do an everyday scan at about 8am + if a restart is needed it requires user confirmation. 'The Software Protection service has stopped' is logged a few minutes before it turned itself off, but that also happened at other times without the computer turning off. I turned off Windows automatic updates as the first thing when I got the comp.
    Configuration: Windows 7 Ultimate, Intel Core2 Quad Q9550 at 2.8GHz, 8GB RAM, ST31000528AS Barracuda 7200.12 SATA 3Gb/s 1TB
    Update: The messages below are logged when I turn the comp on again.
    Message 1: The previous system shutdown at 9:08:54 AM on 6/29/2010 was unexpected.
    Message 2: Level: Critical. Source: Kernel Power. Event ID 41, Task Category (63). The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
    Log after manual restart
    Update 2 (task scheduler)

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:
        ------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        | 108K  | 939  |
        |   1 | SORT ORDER BY          |                        | 108K  | 939  |
        |   2 | NESTED LOOPS OUTER     |                        | 108K  | 938  |
        |*  3 | HASH JOIN RIGHT OUTER  |                        | 103K  | 762  |
        |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 2058  | 3    |
        |* 20 | HASH JOIN RIGHT OUTER  |                        | 73472 | 759  |
        |  21 | VIEW                   | ALL_EXTERNAL_TABLES    | 2097  | 3    |
        |* 34 | HASH JOIN RIGHT OUTER  |                        | 39920 | 755  |
        |  35 | VIEW                   | ALL_MVIEWS             | 51    | 7    |
        |  58 | NESTED LOOPS OUTER     |                        | 39104 | 748  |
        |  59 | VIEW                   | ALL_TABLES             | 6704  | 668  |
        |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       | 2025  | 5    |
        | 106 | VIEW                   | ALL_PART_TABLES        | 277   | 11   |
        ------------------------------------------------------------------------------
    And the same query on 9i:
        ------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        | 16P   | 55G  |
        |   1 | SORT ORDER BY          |                        | 16P   | 55G  |
        |   2 | NESTED LOOPS OUTER     |                        | 16P   | 862M |
        |   3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER        |                        | 398K  | 302  |
        |   7 | VIEW                   | ALL_TABLES             | 342K  | 276  |
        |  29 | VIEW                   | ALL_MVIEWS             | 51    | 20   |
        |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       | 2043  |      |
        |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                   | ALL_PART_TABLES        | 852K  |      |
        ------------------------------------------------------------------------------
    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:
        ------------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY          |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER        |                        | 7587K | 822  |
        |*  3 | HASH JOIN OUTER        |                        | 5262K | 616  |
        |*  4 | HASH JOIN OUTER        |                        | 2980K | 465  |
        |*  5 | HASH JOIN OUTER        |                        | 710K  | 432  |
        |*  6 | HASH JOIN OUTER        |                        | 398K  | 302  |
        |   7 | VIEW                   | ALL_TABLES             | 342K  | 276  |
        |  29 | VIEW                   | ALL_MVIEWS             | 51    | 20   |
        |  50 | VIEW                   | ALL_PART_TABLES        | 852K  | 104  |
        |  78 | VIEW                   | ALL_TAB_COMMENTS       | 2043  | 14   |
        |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 1744K | 31   |
        | 106 | VIEW                   | ALL_EXTERNAL_TABLES    | 1777K | 28   |
        ------------------------------------------------------------------------------
    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables in that means an extra 1.4MB memory being used during population). Next: fun with the 9i dictionary views.
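    As a rough illustration of the two levers discussed above (purely a sketch: the query below is a stand-in, not one of the product's real population queries, and gathering dictionary statistics is the 10g-era advice rather than something you would do on 9i):

        # 10g and later: dictionary statistics let the cost-based optimizer cost these queries sensibly
        echo "EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;" | sqlplus -s / as sysdba

        # 9i: hints force hash joins instead of the RBO's nested loops (scott/tiger and the query are placeholders)
        sqlplus -s scott/tiger @force_hash.sql
        # where force_hash.sql contains something like:
        #   SELECT /*+ ORDERED USE_HASH(pt) */ t.owner, t.table_name, pt.partitioning_type
        #   FROM   all_tables t, all_part_tables pt
        #   WHERE  pt.owner (+) = t.owner
        #   AND    pt.table_name (+) = t.table_name;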

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a ~1.5-2mil page views per day site running on 2 servers. One for MySQL, the other for everything else. The MySQL box has a load of 3, the frontend is usually 0.0-0.1. Both are dual quad core with 8GB RAM running SAS drives in RAID 5. The CPU is idle for the majority of the time, iowait is non-existent. I'm running nginx, memcache, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags something severe, when it takes 10-15 seconds for a page to load. Page execution time is always super low, but it seems to hang, waiting for something before it actually loads the page. What's even more weird is that it only happens to 1 file on the site (but it's the one that's most commonly accessed, that actually loads the content on the site). Other pages are super fast at all times, even when it takes 15 seconds to load actual content. I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column starts going above 100, and it frequently does... all the way to 500-1000. It does so at totally random times... not when traffic is heavy... it can do this in the middle of the night, and work perfectly at 5pm when traffic is at its highest. Any ideas?

    Read the article

  • IIS 7.5 fails to open database after the machine hosting the database server restarts

    - by Jenea
    Hi. I decided to post this question also here, in case the issue we have is related to SQL Server. There is a problem that has bothered me for some time. I have an ASP.NET MVC application that uses NHibernate for modeling the database. The infrastructure is the following: Windows 2008 R2 for all virtual machines. IIS 7.5 is working on one virtual machine. SQL Server 2008 is working on another virtual machine. We have a couple of databases: two that store application data and one that registers all unhandled exceptions. Sometimes the virtual machine that hosts the database server restarts (in the middle of the night, not quite sure about the reason); after that the connection to the databases that store application data is not working, and as a result there are thousands of unhandled exceptions that get registered in the third database. It is important to mention that the databases are accessible from Management Studio. The problem is solved by resetting IIS. Connections are handled via the NHibernateUtil class, which opens and closes a session at each request.

    Read the article

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via command-line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to a MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not a FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify if the credentials are a match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can take a certificate and place it as a trusted source within the OpenLDAP configuration file. I am curious if that is something that I could do for the situation that I have? Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust that it needs, so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try and get this going. Thanks in advance for your help and ideas.
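    One way to capture the certificate without touching the domain controller (a sketch; the host name, port and paths are placeholders and differ between CentOS and Ubuntu):

        # save the certificate the DC presents on the LDAPS port
        openssl s_client -connect people.campus:636 -showcerts </dev/null 2>/dev/null \
            | openssl x509 -outform PEM > /etc/openldap/cacerts/ad-dc.pem

        # then point OpenLDAP at it in ldap.conf instead of lowering TLS_REQCERT:
        #   TLS_CACERT  /etc/openldap/cacerts/ad-dc.pem
        #   TLS_REQCERT demand

    Note that verification can still fail if the name in the certificate doesn't match the name being connected to, which is a separate issue from trusting the self-signed issuer.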

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember testing) and other processes as well. The best thing is that it does work from .NET 2.0 up to .NET 4.5 in x86 and x64. To make it more interesting it can attach to any running .NET process. The reason why I do mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, since only from then on does the profiling API support attaching to a running process. This thing does differ in many aspects from "normal" profilers because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which only exists as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems use new MemoryDumper(true,true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

        …
        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        …

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the finalization queue to check if you did accidentally forget to dispose a whole bunch of objects …

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                                  finalizable.Length,
                                  String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                              .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing. So I will not go into this here. What Next? Depending on the feedback I get, I can imagine some features which might be useful as well:
    - Calculate first order GC Roots from the actual object graph
    - Identify global statics in Types in the object graph
    - Support read out of the finalization queue of .NET 2.0 as well
    - Support Memory Dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation but it is doable)
    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples. How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the main method the scenario you want to check out. If you are greeted with an exception it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway. Now we have something much better.

    Read the article

  • Modify or disable Windows 8 swipe gestures on touchpad / laptop

    - by Matsemann
    I have an ASUS G75VW laptop with a Synaptic touchpad (/trackpad). When I move my finger from one edge towards the middle (the swipe), Windows 8 will bring up different stuff. This is a problem because the area where I can actually move the mouse with my finger is too small (or, I mostly use the top left of the touchpad). So I often end up doing a swipe and bringing up some menu, or to do the swipe so slow that no menu is appearing but the mouse pointer is also not moving when I move my finger. Quite annoying. When swiping from left edge it earlier swapped apps like crazy. I disabled that, so now it only brings up the same menu as pressing win+tab (or some times the charms bar, I never know which). I could change that by: Win+I - Change PC settings - General - When I swipe from the left edge, switch directly to my most recent app I've tried Mouse settings in Control Panel, driver settings for my touchpad and searching for swipe and gestures on my computer (which was what led me to the setting above) with no luck. How can I disable the swipe gestures, or change what they do? Thanks.

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked a lot of people about a lot of things, including me (he talked to me about a lot of things, I don’t think he talked to a lot people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this:
    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network)
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machine
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.
    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine and blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate this. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
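    Given that threat model, a small wrapper around ssh-keygen -R and ssh-keyscan is one way to automate the refresh for whitelisted hosts only (a sketch; the whitelist contents are placeholders):

        #!/bin/sh
        # usage: refresh_key.sh <host>
        HOST="$1"
        WHITELIST="qa01 qa02 qa03"

        case " $WHITELIST " in
          *" $HOST "*) ;;
          *) echo "$HOST is not whitelisted" >&2; exit 1 ;;
        esac

        ssh-keygen -R "$HOST"                              # drop the stale key
        ssh-keyscan -t rsa "$HOST" >> ~/.ssh/known_hosts   # record whatever the rebuilt box now offers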

    Read the article

  • USB question (how durable is it, how should I workaround this)

    - by Shiki
    The plot is quite simple. Got a Razer mouse. If I plug it in, it works. After a shutdown/hibernation, I have to replug it entirely at the back of the PC. (It works in my laptop even after severel shutdown, etc, so yes I guess it's my motherboard.. but it still got 2 years of warranty and it comes with quad SLI, its not an old motherboard at all. (MSI P7N SLI FI (bought it after a hungarian guy's recommendation)). So. I only could come up with one "solution". Get 3 USB cable (you know, USB-USB). If its possible the shortest ones (don't know if the responsibility/anything will worsen), AND replug only the middle+closest to the USB port junction, since those are replaceable. What do you think? Any other idea? (BIOS is updated, mouse driver ... doesn't really matter, the mouse won't even blink a bit after this happens. It lights up and goes totally dead.)

    Read the article

  • USB Harddisk not working on dual boot windows7/8

    - by Jesper
    Yesterday I installed Windows 8 on a machine that already had Windows 7. They are on dual boot and both systems work fine. The problem is that inserting a USB hard disk in either system does nothing. If I connect a USB mouse or mobile phone, they work fine, so the USB plugs are active/working and the USB hard drives that I am trying to connect work on my other laptop just fine. I have tried to uninstall all USB-related items in Device Manager and let them reinstall upon restart, but that didn't help. The USB drive does not show up in disk management either. The strange thing is that it is exactly the same situation on both windows. USB mice etc. work just fine and USB hard drives do not. Any ideas on solving this problem would be great. ...Don't know if it is important, but this is a Toshiba Tecra R950 Laptop. EDIT: I have found out that my other USB HD (Western Digital) works on this laptop, but for my StoreJet Transcend and Adata "something" does not work. All three work on another Windows 7 laptop. Sizewise the WD is in the middle at 400 GB. The StoreJet is 640 GB and the Adata is 200 GB.

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static asp pages such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count but we have roughly 6000 asp files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects, that is not the issue here. The problems I am coming up with right now are listed below.
    1 - Is there a limit to the number of redirects in IIS?
    2 - Would having even a few thousand redirects affect IIS performance?
    3 - My understanding is that we would not be passing along page rank to the new URLs, is that true? (not a major question, I can ask on more SEO forums if nobody here is sure)
    4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?
    Our server right now is running Server 2003, however in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).

    Read the article

  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly, with one specific user with our UW-IMAP server. We have about 75 users using the server, and one particular user, who is in about the middle as far as used storage keeps having issues with slow speed. Most of our users all use Thunderbird 2, or Thunderbird 3. Mostly 2, because of the performance issues we have had with 3. This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual T/S'ing: Started profile from scratch, compacted folders, re-indexed, newer faster computer, etc.. Still, this users' imapd process is always using the most CPU on the server. For troubleshooting, we setup another user which has more usage, folders, etc.. than he does, but we don't see the users process taking up most of the CPU with the imapd process. So, it almost sounds like a particular email may be the culprit, but how can we find it, if thats the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?

    Read the article

  • ability to see free/busy detail information for conference rooms in Outlook 2007 and Microsoft hosted Exchange solution

    - by Malav
    recently my company migrated from an in-house Exchange server to the Microsoft hosted exchange online solution. My client is Outlook 2007. Before the migration, I could see the details of the meetings when I hovered on the busy blue bar for a resource such as a conference room. I could click on the meetings and see the invite list and the contents of the meeting. Ofcourse if the meeting was marked as private I could not. however after the migration to the online solution, I cannot see the detailed information. I can still see if the room is busy or not but I can no longer see the details of that meeting. The IT folks can see the information and they claim that they can see it because they have full admin rights. It is their claim that in the hosted Exchange solution you can either have full access (admin access) and see the details or not see anything but just that the room is busy. there is no middle ground such as being able to see the details of the meeting but not having any admin rights. For some reason I believe this to be not true. Can someone please verify my doubts and inform me of what needs to be done to see that information if my IT folks are wrong? thanks

    Read the article

  • break Folder Protection, Folder Guard, or a folder lock in Windows XP?

    - by SonyAdi
    I was making a new partition with Partition Magic when all of a sudden the power failed. Unfortunately my computer is not equipped with a UPS (uninterruptible power supply), so it died too. When power was restored I tried to turn the computer on, but it could not boot normally into Windows. I tried safe mode and all the other boot options; all of them failed, it cannot boot at all, not even into safe mode. And I know the cause: Partition Magic did not finish its work, it stopped in the middle of moving data, and as a result the default Windows files were destroyed as well. Unfortunately my important data is stored in My Documents. So I took my hard drive to a friend, hoping to open it on his computer so that I could at least save my important data, and then reinstall Windows by reformatting my hard drive first. I can read the hard drive in Explorer on my friend's machine, complete with its data, but my important data in My Documents cannot be opened because it requires administrator privileges or the original default user of my Windows installation (my computer) to open that folder. This is actually very similar to how Folder Protection or Folder Guard work. I was disappointed and almost desperate to get my important data back. How do I break Folder Protection, Folder Guard, or a folder lock in Windows XP?
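    If the folder is only protected by NTFS permissions (and not EFS-encrypted, in which case the file names show in green and only the original user's key can open them), taking ownership from the friend's machine is the usual way back in. A sketch, assuming the friend's machine runs Vista or later and E: is the old drive (on XP itself the same thing is done via folder Properties, Security, Advanced, Owner):

        rem the path below is a placeholder for the old profile's My Documents folder
        takeown /F "E:\Documents and Settings\OldUser\My Documents" /R /D Y
        icacls "E:\Documents and Settings\OldUser\My Documents" /grant %USERNAME%:F /T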

    Read the article
