Search Results

Search found 2197 results on 88 pages for 'explanation'.


  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I thought it might be just as well to post it here also.

    I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my MacBook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():

        Access denied for user '_securityagent'@'localhost' (using password: NO)

    First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with perfect results. Secondly, the error only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:

    1. Collect all the data from the article form (title, content, dates, etc.)
    2. Validate the collected data
    3. Connect to the database
    4. Dynamically build the SQL query based on the validated article data
    5. Send the query to the database before closing the connection

    Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across an article at Apple's Developer Connection talking about some random bug:

        "Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, '_securityagent'."

    Then I tried to put a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query().

    Update: Here is the exact code I'm using to connect to the database. But a little explanation must follow:

    - The connection error happens at a call to mysql_query() in function X in class_1
    - class_1 uses class_2 to connect to the database
    - class_2 reads a config file with the database connection variables (host, user, pass, db)
    - class_2 connects to the database through the following function:

        var $SYSTEM_DB_HOST = "";

        function connect_db() {
            // Reads the config file with the connection variables
            include('system_config.php');

            if (!($SYSTEM_DB_HOST == "")) {
                // Note: the link resource returned by mysql_connect() is not
                // captured here, so later mysql_* calls rely on the default
                // (most recently opened) connection
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }
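    For comparison, here is the pattern I would use to rule out one hypothesis of mine - that some call is falling back to an implicit default connection after the real one is closed. This is only a sketch in Python (using the pymysql package; all names and values are illustrative, not from my actual code), showing the connection handle being passed around explicitly instead of discarded:

        import pymysql  # assumed installed: pip install pymysql

        # Hypothetical stand-ins for the values read from system_config.php
        DB_HOST, DB_USER, DB_PASS, DB_NAME = "localhost", "dev", "secret", "mydb"

        def connect_db():
            # Return the connection object instead of discarding it, so no
            # later call can silently fall back to a default connection.
            return pymysql.connect(host=DB_HOST, user=DB_USER,
                                   password=DB_PASS, database=DB_NAME)

        conn = connect_db()
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM articles")  # illustrative query
            print(cur.fetchone())
        conn.close()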


  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed however that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing. What I found out was that when I write lowercase "load", I never make the typo. All uppercase, or just an uppercase L, and I make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:

        moatmoatmoat MoatMoatMoat
        loatloatloat LaotLaotLaot
        loafloafloaf LaofLaofLaof
        hoathoathoat HoatHoatHoat
        hoadhoadhoad HoadHoadHoad
        lortlortlort LrotLrotLrot

    What I found out was that whenever shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:

        High "o" lag:                 LoLoLoLoLoLo
        No "a" lag:                   LaLaLaLaLaLa
        No lag for either "o" or "a": lolololololo lalalalalala

    By realizing this I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just caused my input to be interpreted wrongly. Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d. And yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritized and is inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo".

    This problem could stem from the keyboard hardware, the receiver hardware or the keyboard software driver. No matter where the fault lies, however, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure, I could live with that, albeit irritated. But a lag affecting different keys differently, and even resulting in unpredictable output order - that just doesn't make any sense.

    I've solved the problem by switching to a wired keyboard. I just can't shake it off me though; what kind of bug/error/scenario would result in a case like this?

    Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical design that could result in this behavior.
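    To make the puzzle concrete, here is a small thought experiment in Python (entirely hypothetical timing numbers, just modeling the behavior I described): with a FIFO, emitting keys strictly in arrival order, no per-key processing delay can ever swap two characters. Reordering only becomes possible if output order follows completion time rather than arrival time:

        # Hypothetical model: each event is (press_time_ms, key, processing_ms).
        # The "o" after an uppercase "L" is given extra processing time to
        # mimic the observed lag; all numbers are made up for illustration.
        events = [
            (0,   "L", 5),
            (50,  "o", 200),  # assumed slow processing for "Lo"
            (100, "a", 5),
            (260, "d", 5),
        ]

        # FIFO: emit strictly in arrival (press) order -> always "Load".
        fifo_output = "".join(key for _, key, _ in sorted(events))

        # Completion-ordered: emit when processing finishes -> can yield "Laod".
        completion_output = "".join(
            key for _, key in sorted(
                (press + work, key) for press, key, work in events
            )
        )

        print(fifo_output)        # Load
        print(completion_output)  # Laod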


  • IKE Phase 1 Aggressive Mode exchange does not complete

    - by Isaac Sutherland
    I've configured a 3G IP gateway of mine to connect using IKE Phase 1 Aggressive Mode with PSK to my Openswan installation running on Ubuntu Server 12.04. I've configured Openswan as follows:

        /etc/ipsec.conf:

            version 2.0

            config setup
                nat_traversal=yes
                virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
                oe=off
                protostack=netkey

            conn net-to-net
                authby=secret
                left=192.168.0.11
                [email protected]
                leftsubnet=10.1.0.0/16
                leftsourceip=10.1.0.1
                right=%any
                [email protected]
                rightsubnet=192.168.127.0/24
                rightsourceip=192.168.127.254
                aggrmode=yes
                ike=aes128-md5;modp1536
                auto=add

        /etc/ipsec.secrets:

            @left.paxcoda.com @right.paxcoda.com: PSK "testpassword"

    Note that both left and right are NATed, with dynamic public IPs. My left ISP gives my router a public IP, but my right ISP gives me a shared dynamic public IP and a dynamic private IP. I have dynamic DNS for the public IP on the left side. Here is what I see when I sniff the ISAKMP protocol:

        21:17:31.228715 IP (tos 0x0, ttl 235, id 43639, offset 0, flags [none], proto UDP (17), length 437)
            74.198.87.93.49604 > 192.168.0.11.isakmp: [udp sum ok] isakmp 1.0 msgid 00000000
            cookie da31a7896e2a1958->0000000000000000: phase 1 I agg:
            (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1
                (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)
                 (type=auth value=preshared)(type=group desc value=modp1536)
                 (type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
            (ke: key len=192)
            (nonce: n len=16 data=(da31a7896e2a19582b33...0000001462b01880674b3739630ca7558cec8a89))
            (id: idtype=FQDN protoid=0 port=0 len=17 right.paxcoda.com)
            (vid: len=16) (vid: len=16) (vid: len=16) (vid: len=16)

        21:17:31.236720 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 456)
            192.168.0.11.isakmp > 74.198.87.93.49604: [bad udp cksum 0x649c -> 0xcd2f!] isakmp 1.0 msgid 00000000
            cookie da31a7896e2a1958->5b9776d4ea8b61b7: phase 1 R agg:
            (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1
                (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)
                 (type=auth value=preshared)(type=group desc value=modp1536)
                 (type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
            (ke: key len=192)
            (nonce: n len=16 data=(32ccefcb793afb368975...000000144a131c81070358455c5728f20e95452f))
            (id: idtype=FQDN protoid=0 port=0 len=16 left.paxcoda.com)
            (hash: len=16) (vid: len=16) (pay20) (pay20) (vid: len=16)

    However, my 3G gateway (on the right) doesn't respond, and I don't know why. I think left's response is indeed getting through to my gateway, because in another question I was trying to set up a similar scenario with Main Mode IKE, and in that case it looks as though at least one of the three two-way Main Mode exchanges succeeded. What other explanation for the failure is there? (The 3G gateway I'm using on the right is a Moxa G3150, by the way.)
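    In case it helps anyone reproduce this: aggressive mode completes in three messages, so a quick way to see where the exchange stalls is to count IKE packets per peer while retrying the connection. A rough sketch in Python with scapy (run as root; just a diagnostic aid, not specific to Openswan):

        from collections import Counter
        from scapy.all import sniff, IP, UDP

        seen = Counter()

        def track(pkt):
            if IP in pkt and UDP in pkt:
                seen[pkt[IP].src] += 1  # count IKE packets per source address
                print(pkt.summary())

        # Aggressive mode needs 3 messages; if the initiator's count stalls
        # at 1, its third (hash/auth) message is being lost or never sent.
        sniff(filter="udp port 500 or udp port 4500", prn=track, timeout=120)
        print(seen)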


  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that:

    1. Can run on both server and desktop machines;
    2. Virtualizes hardware and has the option of exposing each real machine to the cloud;
    3. Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop);
    4. Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via Ethernet); and
    5. Is geography-aware, to control movement of VMs between real networks.

    I'm aware that this may be the holy grail of virtualization, and I've searched a lot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination:

    1. A standard cloud server farm.
    2. A bare-metal network backup utility with PXE boot.
    3. VNC and/or VDI. (VNC obviously would require the real hardware to be running.)

    This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows...

    I have a network of real servers and desktops in multiple locations. I've virtualized servers before using VirtualBox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc.) that limits their mobility. Geographic constraints also limit movement, in that VMs can be moved easily within offices, but transferring between offices is not always preferable.

    What I would like to find is a system that can virtualize everything from bare metal easily, by maintaining an abstraction layer on each client and server machine that exposes the hardware available and runs as a cloud. Certain VMs would then be "locked" to specific hardware (so that, e.g., the VM runs only on its own desktop). This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since, if a VM is taken down, another VM can connect to the real hardware for minimum downtime.


  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to Server Fault, so please pardon me if my details are too much.

    A Linux box acting as a virtual host for domain hosting. Runs CentOS. Runs Parallels Plesk 9.x. Regardless of the following, the SPAM keeps flowing in at 1-3 messages per second.

    An explanation of the problem:

        "The xinetd service listens for SMTP connections and forwards to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue... that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. Problem is, qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So, it accepts bad mail with a forged reply-to address, which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged reply-to address."

    We believe the fix is to patch the qmail-smtpd process to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk control panel. Is anyone aware of something we could do differently or better?

    Other things that have NOT worked thus far:

    - Turning off any and all mail processes (to check, as an indicator, whether an individual account has been compromised. This has been verified as NOT the case.)
    - Turning off mail AND HTTP server processes (in the case of a compromised formmail).
    - Running Exim in lieu of qmail (an easy/quick install, but xinetd forces Exim to close and restarts qmail on its own).
    - Turning on SPF protection via the Plesk GUI. Does not help.
    - Turning on greylisting via the Plesk GUI. Does not help.
    - Disabling bounce notifications via the command line.

    Things that MIGHT work but have complications:

    - Using Postfix instead of qmail (I have no knowledge of Postfix and don't want to bother with it unless anyone knows it has the potential to handle backscatter WELL, before investing the time).
    - As mentioned above, compiling the chkuser patch, which we believe will STOP this problem (because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY).

    If I don't clear the SPAM out of the outgoing mail queue nightly, it clogs up with millions of SPAMs and brings down the OUTGOING email services. Any and all help welcome and appreciated!
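    For what it's worth, the principle behind chkuser - reject unknown recipients during the SMTP dialogue instead of bouncing later - is easy to demonstrate. A minimal sketch in Python using the aiosmtpd package (the mailbox list is hypothetical; this illustrates the concept only, it is not a drop-in qmail replacement):

        from aiosmtpd.controller import Controller

        LOCAL_MAILBOXES = {"alice@example.com", "bob@example.com"}  # assumed list

        class RecipientValidator:
            async def handle_RCPT(self, server, session, envelope,
                                  address, rcpt_options):
                # Refuse unknown local users up front: the sender gets a 550
                # during the SMTP session, so no bounce is ever generated.
                if address.lower() not in LOCAL_MAILBOXES:
                    return "550 5.1.1 No such user here"
                envelope.rcpt_tos.append(address)
                return "250 OK"

            async def handle_DATA(self, server, session, envelope):
                return "250 Message accepted for delivery"

        controller = Controller(RecipientValidator(),
                                hostname="127.0.0.1", port=8025)
        controller.start()  # runs in a background thread; stop() when done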


  • Powershell Exchange script returning inconsistent results - PS weirdness

    - by Ian
    Maybe someone can shed some light on a bit of PowerShell weirdness I've come across and can't explain. This PS script returns a list of Exchange databases, their sizes and the number of mailboxes in each one:

        Get-MailboxDatabase | Select Server, StorageGroupName, Name, @{Name="Size (GB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$"+ $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1048576KB; [math]::round($size, 2)}}, @{Name="Size (MB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$"+ $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1024KB; [math]::round($size, 2)}}, @{Name="No. Of Mbx";expression={(Get-Mailbox -Database $_.Identity | Measure-Object).Count}} | Format-Table -AutoSize

    If I add a simple 'sort name' before the 'Format-Table', my resulting table contains blanks where the database sizes and number of mailboxes should appear (not zeros - blank, empty space)... but only in some rows, not all rows. Some rows contain numbers! If I put the '| sort name |' after the initial 'Get-MailboxDatabase' it works fine.

    What's even weirder is if I do the following:

    1. Execute the above command (case 1)
    2. Add the sort before Format-Table
    3. Execute the new command (case 2)
    4. Execute the initial command again (case 3)

    What I see is different amounts in each of the three cases - all of which are incorrect. Yet cases 1 and 3 are the same command, and the only difference with case 2 is a sort. Cases 1 and 3 should, at a minimum, return the same results. I get blanks where I should have MBs. If I add the sort after Get-MailboxDatabase, it always returns identical results (as it should).

    Can anyone suggest an explanation as to what may be going on? If it's of any help in reading the expression, I've reformatted it here to make it a bit more readable:

        Get-MailboxDatabase |
            Select Server, StorageGroupName, Name,
            @{Name="Size (GB)";Expression={
                $objitem = (Get-MailboxDatabase $_.Identity);
                $path = "`\`\" + $objitem.server + "`\" +
                    $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" +
                    $objItem.EdbFilePath.PathName.Remove(0,2);
                $size = ((Get-ChildItem $path).length)/1048576KB;
                [math]::round($size, 2)
            }},
            @{Name="Size (MB)";Expression={
                $objitem = (Get-MailboxDatabase $_.Identity);
                $path = "`\`\" + $objitem.server + "`\" +
                    $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" +
                    $objItem.EdbFilePath.PathName.Remove(0,2);
                $size = ((Get-ChildItem $path).length)/1024KB;
                [math]::round($size, 2)
            }},
            @{Name="No. Of Mbx";expression={
                (Get-Mailbox -Database $_.Identity | Measure-Object).Count
            }} |
            Format-Table -AutoSize


  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main" most CPU-hungry processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (it goes up above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour.

    It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200k q/s (selects) and ~10 q/s (writes). The host is never running out of memory or swapping; no OOM reports are spotted.

    We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. Applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?
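    One low-tech idea that follows from the question itself: log a snapshot of load, memory and per-core CPU to local disk every few seconds, so that after the forced reboot there is a trail right up to the hang. A minimal sketch in Python (psutil assumed installed; the log path is arbitrary):

        import os
        import time
        import psutil

        LOG = "/var/log/hang-probe.log"  # arbitrary path

        with open(LOG, "a", buffering=1) as f:  # line-buffered
            while True:
                la1, la5, la15 = os.getloadavg()
                mem = psutil.virtual_memory()
                cores = psutil.cpu_percent(percpu=True)
                f.write("%s load=%.1f/%.1f/%.1f mem=%d%% cores=%s\n" % (
                    time.strftime("%Y-%m-%d %H:%M:%S"),
                    la1, la5, la15, mem.percent, cores))
                os.fsync(f.fileno())  # push to disk before a possible hang
                time.sleep(5)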


  • UAE and the mysteries of unreachable websites

    - by 0plus1
    I write here because I'm really lost; please stay with me because it's not easy to explain.

    A company asked me to set up a private server. Now, I'm a programmer, so I got a solution with technical support and cPanel, which helped me to set up everything, and it's working smoothly. I'm by no means a professional sysadmin, but I have a fair knowledge of server configuration. This problem, however, is way over my knowledge, and apparently way over the knowledge of most sysadmins. I really hope that here I'll find someone with enough experience to help me, or at least give me more insight.

    Now, this company for which I'm consulting operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with nameservers not registering in the UAE; after a week that sorted itself out, and now the site is indeed reachable, but it takes almost 2 minutes to load a webpage with one line of text. Emails time out. The domain currently parked there was bought specifically for tests; the main one that was supposed to go there, after a catastrophic week, has been transferred to a shared hosting solution in the UK, and from there it works like a charm.

    After doing some research I discovered that I'm not alone in this: there are several reports of webmasters discovering that their websites are not reachable inside the UAE. Mind you, this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears. This seems to be related to the infrastructure of the UAE, which apparently reroutes everything through its own "fake" internet. Apparently new servers with their own IPs are not recognized (yet?) by the UAE infrastructure, while shared hosting solutions, seeing that they host tons of other websites, are more likely to be part of the UAE network.

    Now my questions are:

    1. Does someone have a real explanation for this? The only thing I can think of is that the server is on a new IP that is not yet recognized by the UAE, but that doesn't explain why it loads (even if after 2 minutes). I don't have any help from within the UAE, as the only "experts" there are questionable companies that simply try to sell their own services.
    2. If there really is some kind of block on new servers, is it possible to know beforehand whether a server is reachable from within the UAE? Currently this is not a nameserver problem, as even accessing the server by its IP results in a 2-minute wait.
    3. Could the problem lie somewhere else? Are there some tests I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server (mind that the site works EVERYWHERE else in the world)?

    Thank you for ANY kind of help.
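    One test that could be run from a machine inside the UAE (e.g. over TeamViewer) to separate DNS, TCP and HTTP delays - a rough sketch in Python, with the hostname as a placeholder:

        import socket
        import time
        import http.client

        HOST = "example.com"  # placeholder for the real domain

        t0 = time.time()
        ip = socket.gethostbyname(HOST)                         # DNS resolution
        t1 = time.time()
        sock = socket.create_connection((ip, 80), timeout=180)  # TCP handshake
        sock.close()
        t2 = time.time()
        conn = http.client.HTTPConnection(ip, 80, timeout=180)  # full HTTP GET
        conn.request("GET", "/", headers={"Host": HOST})
        resp = conn.getresponse()
        resp.read()
        t3 = time.time()

        print("dns=%.2fs tcp=%.2fs http=%.2fs status=%d" %
              (t1 - t0, t2 - t1, t3 - t2, resp.status))

        # If the TCP handshake is already slow, the delay sits in routing or
        # filtering before the server; if only the HTTP phase is slow, look
        # at the server itself or at something inspecting payloads.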


  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x speed on a 6-core CPU on an Ubuntu Linux box.

    I had an opportunity to run the program on a very high-end Sunfire X4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads!

    Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads run about 1.7x faster than 1, but 3, 4, 8, 10 and 16 threads all run at just 1.9x net! I can see all the threads are running (not stalled or sleeping), they're just slow.

    To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores, and they really do run at full speed, and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process).

    So, my question is whether there's some OPERATING SYSTEM explanation - perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine.

    Clues are:

    - My program does not access the disk or network. It's CPU-limited.
    - Its speed scales linearly on a single-CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively a 6x speedup.
    - My program never runs faster than a 2x speedup on this 16-core Sunfire Xeon box, for any number of threads from 2-16.
    - Running 16 copies of my program single-threaded runs perfectly, all 16 running at once at full speed.
    - top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at the full 2.9GHz speed (not the low-frequency idle speed of 1.6GHz).
    - There's 48GB of RAM free; it is not swapping.

    What's happening? Is there some process CPU limit policy? How could I measure it if so? What else could explain this behavior?

    Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
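    A quick way to compare the two boxes for per-process policy differences, since that is the suspicion here: dump this process's resource limits and its allowed CPU set on both machines and diff the output. A small probe in Python (Linux-specific; this only surfaces candidate limits, it doesn't prove which one is responsible):

        import os
        import resource

        # Print the soft/hard resource limits for this process - run the same
        # probe on the Ubuntu box and on the Sunfire box and compare.
        for name in dir(resource):
            if name.startswith("RLIMIT_"):
                soft, hard = resource.getrlimit(getattr(resource, name))
                print("%-20s soft=%s hard=%s" % (name, soft, hard))

        # Which cores is this process actually allowed to run on?
        # (Linux only; a cpuset/affinity restriction would show up here.)
        print("allowed CPUs:", sorted(os.sched_getaffinity(0)))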


  • Webserver Responses Hanging

    - by drscroogemcduck
    From some networks, requesting certain images on our webserver is very flaky. I've looked at tcpdumps on both sides: the server sends back part of the file and the client ACKs the TCP packet, but the server never receives the ACK.

    The server's view:

        41 19.941136 212.169.34.114 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
        42 19.941136 209.20.73.85 212.169.34.114 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
        46 20.041142 212.169.34.114 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
        47 20.045142 212.169.34.114 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
        48 20.045142 209.20.73.85 212.169.34.114 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
        49 20.045142 209.20.73.85 212.169.34.114 TCP [TCP segment of a reassembled PDU]
           (Part of the content of the image, 2720 bytes. I assume it is reassembled in tcpdump and fragmented over the wire.)

        ** The server never receives the ACK sent in frame 282 and will eventually resend the TCP segment. **

    The client's view:

        274 26.161773 10.0.16.67 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
        276 26.262867 209.20.73.85 10.0.16.67 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
        277 26.263255 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
        278 26.265193 10.0.16.67 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
        279 26.365562 209.20.73.85 10.0.16.67 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
        280 26.368002 209.20.73.85 10.0.16.67 TCP [TCP segment of a reassembled PDU]
            (Part of the content of the image. Only 1400 bytes.)
        282 26.571380 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=522 Ack=1361 Win=65280 Len=0

    The network we are having trouble with is NATed. Is there any kind of explanation for this weirdness?


  • How to temporary disable a mirror video driver in windows xp registry

    - by happy clicker
    Because it's a lot of text, I will ask my question first and then explain what the base problem is. Perhaps someone can give me a solution to the base problem.

    Is there a way to temporarily disable mirror video drivers (through the registry or so) without uninstalling the corresponding software? I tested changing the enumeration in LocalMachine\Hardware\DeviceMap\Video, but after a reboot the old configuration is always restored.

    Explanation of the base problem:

    We are working on a WPF project for a department of a big company. There we have the problem that WPF renders only in software mode, although the hardware they have must support hardware rendering (Tier 2). After searching for a solution to the problem, we found out that Direct3D does not work properly, and we think that's why WPF can only use software rendering. In dxdiag.exe the Direct3D acceleration is enabled, but if we start the test routine it always fails, saying that it does not have enough memory (it says memory, not video memory!). I have seen 3 different types of PCs there (they have some hundreds of each type), and every type shows exactly the same behavior. We tried to update all the drivers, including DirectX (version 9.0c), and we searched a lot on the web but could not find a solution.

    All the PCs have Intel dual-core processors or better; one type has an Intel GMA 9000 graphics card, and the other two types have recent ATI and NVIDIA graphics cards with 256MB onboard memory. System memory is at least 2GB. Windows is XP SP3. The PCs are from two different manufacturers.

    Because we see exactly the same behavior on every computer of these three very different computer types, we don't think that this is a driver or a DirectX problem. What we've found in other newsgroups is that DirectX can be disturbed by mirror video drivers such as NetMeeting, VNC and other remote-desktop installations. In the registry, we see a lot of such mirror entries under LocalMachine\Hardware\DeviceMap\Video, and we also find the definitions in the CurrentControlSet\Control\Video section (however, these drivers are not shown in the hardware panel of the OS).

    We can get admin rights on one of these computers to test whether disabling these drivers would help, but we must not change the configuration in a way that breaks other software afterwards. Therefore I cannot uninstall any software, because I have neither the media, the licenses nor the know-how to reinstall those apps. The support of this company, however, will only begin to work if I can tell them what the real problem is. That's why we are searching for a way to disable these mirror drivers (or a hint to solve the DirectX problem, if we are on a false trail).
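    As a strictly read-only first step (so nothing in the configuration is touched), the entries in question can at least be inventoried. A small sketch in Python using the standard winreg module:

        import winreg

        # List the video device map: each value points at the registry key of
        # the driver backing \Device\VideoN - mirror drivers show up here too.
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"HARDWARE\DEVICEMAP\VIDEO")
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:  # no more values
                break
            print(name, "->", value)
            i += 1
        winreg.CloseKey(key)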


  • PC will POST whenever feels likes it

    - by kyrpas
    I'm really sick of my PC, and I'd love to throw it off the 5th floor, but unfortunately I don't have that luxury right now. The issues started when I moved to a new house about 2 months ago; I didn't have this problem before.

        Case: Arctic Cooling Silentium T1 with embedded Fusion 550 Eco 80 PSU
        M/B:  ASRock A790GMH/128M
        Gfx:  ATI Radeon HD 5770

    Here's what's happening, almost on a daily basis: I wake up in the morning and switch on the PC, and all the fans start spinning. 9 times out of 10 the graphics fan stays on 100% and I know it won't POST. If I'm lucky, ATI's fan stays on full power for a second, then goes back to normal and I get a normal POST, but that doesn't happen often. No, instead it just drives me crazy.

    When I get no POST I try a lot of different things, and what bothers me the most is that they all work - but not always. Not that way I could find out what the hell is going on, and we don't want that, right? So, sometimes it manages to POST if I:

    - remove the keyboard
    - remove the power cable for a few minutes
    - remove the graphics card
    - remove the HDD cables
    - do nothing, just turn it on and off a few times

    Sometimes it doesn't POST even if I do all of the above, and I end up removing all power cables from the M/B and connecting everything one by one. Sometimes it works, sometimes it doesn't, and I just have to pray and wait.

    What the hell is that? I'm getting pissed off again just thinking about it. The only workaround is to leave it on 24/7, but I don't want to do that. It should be able to turn on and off when I press the power button; I'm not asking much. I'm starting to think there's some weird electricity/power issue, but I really don't understand what it is. There's no logical explanation for it - at least I can't find one. Any ideas?


  • Proper set up shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a huge problem with setting up proper permissions for shared folders. I have:

    - Windows 7 x64 Ent., name: backupfb, joined to the domain, with a shared folder on drive E: (E:\backup)
    - 50 clients/laptops with TSM (Tivoli Storage Manager) FastBack for Workstations, which save files to the shared folder

    I need to configure the permissions for my shared folders so that only the owner of a folder can access it. The folder structure is:

        E:\backup                 <- shared as the "backup" folder (\\backupfb\backup\)
        E:\backup\BackupAdmin     <- used by the TSM FastBack for Workstations client to
                                     download revisions and configurations; nodes require
                                     read-only access to these directories
        E:\backup\RealTimeBackup  <- client accounts create directories here that should be
                                     accessible only to the account that created them; the
                                     directory that contains data for a node is not created
                                     until that node connects to the server

    So the permissions should look like this (taken from the instructions). Inheritable permissions from the object's parent are DISABLED, and both folders have the option "apply these permissions to objects and/or containers within this container only" enabled.

        \\backupfb\backup\BackupAdmin:
            Allow  Users           This folder, subfolders and files:
                                       Traverse Folder / Execute File
                                       List Folder / Read Data
                                       Read Attributes
                                       Read Extended Attributes
                                       Delete Subfolders and Files
                                       Delete
                                       Read Permissions
            Allow  Administrators  Full Control - This folder, subfolders and files

    Here everything works fine.

        \\backupfb\backup\RealTimeBackup:
            Allow  Administrators   Full Control - This folder, subfolders and files
            Allow  CREATOR OWNER    Full Control - This folder, subfolders and files
            Allow  Users (domain)   This folder only:
                                       Traverse Folder / Execute File
                                       List Folder / Read Data
                                       Read Attributes
                                       Read Extended Attributes
                                       Create Files / Write Data
                                       Create Folders / Append Data
                                       Delete Subfolders and Files
                                       Read Permissions
            Allow  OWNER RIGHTS     Full Control - This folder, subfolders and files

    Here I have a huge problem with CREATOR OWNER. I'm able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the properties to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only".

    So I tried to use icacls to set up the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    But after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled
        incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and
        directories involved and that the file and directory exist.

    Any idea? Please help ;-) Thanks.
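    One thing that may be worth testing (an assumption on my part, not something from the product docs): the icacls grant in the batch file carries no inheritance flags, so it may not propagate to folders the node creates later. A sketch in Python that issues the same grants with (OI)(CI) - object-inherit and container-inherit - set; the 'cloud' domain name is taken from the batch file above:

        import subprocess
        from pathlib import Path

        ROOT = Path(r"E:\backup\RealTimeBackup")
        DOMAIN = "cloud"  # as in the batch script

        for folder in ROOT.iterdir():
            if not folder.is_dir():
                continue
            user = folder.name  # folder name assumed to match the account name
            # (OI)(CI) makes the grant inheritable by new subfolders and files.
            subprocess.run(
                ["icacls", str(folder),
                 "/grant:r", "%s\\%s:(OI)(CI)F" % (DOMAIN, user),
                 "/T", "/C"],
                check=False,
            )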


  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single-board computers on a network running Debian (Squeeze), and I access them via SSH (the SSH server is Dropbear). To give an idea of the hardware: they have 1.2 GHz x86 processors, 1GB of RAM and 4GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling); there is also a swap partition on the drive.

    Normally the setup I'm using works great and I can access all the computers. Every once in a while, one will prevent access. What happens is I try to connect via SSH (PuTTY) and it gives me the login prompt; I enter the username and password and it responds 'Access Denied'. It will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct, as they worked previously. The computer responds to pings, and PuTTY recognizes the server's public key, which implies to me that the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab, but this doesn't seem to run reliably once the hang happens.) Once I restart, however, there doesn't seem to be any information in any of the system logs to indicate what happened; the logs are simply empty for that time period, as if the system had crashed.

    There is some custom software running on the system which appears to stop working (which is why I wanted to SSH in to begin with). I'm assuming that this program is the source of the problems, but I'm unsure of how it would cause them and how to debug what is happening. The most likely explanation I can think of is that there is a memory leak in the other program that then prevents Dropbear from spawning a new login shell (and crontab from executing shutdown), as there is not enough free memory. But looking at the memory usage of the other (working) computers, there doesn't seem to be any meaningful increase in memory to indicate a leak (unless it's a very big, fast-acting and rare leak). I would think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel restarts, right?).

    The other thing I wonder about is whether the fact that they are running off a flash drive could have some effect, especially the swap partition (which I think I should remove to prevent wear of the flash), but the flash drives are young (~1 month) and I don't think that wear would be a factor yet.

    Does anybody have an idea of what could cause these symptoms, whether it could be caused by a memory leak, or something else I haven't thought of? And does anybody know of a method to debug the problem and find out more information about what's going wrong?


  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please

    the short version

    i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours). over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work out what to do (this has pissed the client off even more)

    to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he then wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations)

    i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), and he doesnt seem to care too much for the protocols i use for dealing with scope creep and change control

    the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got

    finally, he began to bully me - basically insisting i shut up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. then i deleted that email and wrote a new one

    in the new email, i suggested as a solution that i write up a list of grievances we both had. we then review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground

    his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do

    so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less biased, as it has the email communiques between me and my client

    the long version (with background)

    the long version works by me showing you the email communiques which led to the situation coming to a head. so here it is, judge for yourself where the trouble started...

    1. client asked me why something was missing from a feature i just uploaded; my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included

    2. [clients response...]
        Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields.

    3. [my response...]
        hi [client], the 'part' field appeared to me as a redundancy / mistake. i requested clarification but never received any in a timely manner (about 2 weeks ago). the specification for this feature also indicated it wasnt going to be included. RE: "Why the big fuss about not adding 'Part'" - it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field. it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features. as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model. basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out. also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple; you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recall, i said the best course of action is to just ask, and i would let you know on a case-by-case basis

    4. [email from me to client...]
        hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works). note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now). RE: the UI/layout of the online catalogue page - you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time)

    5. [email from client to me...] (note: i had put up the feature & asked the client to review it, never heard back from them for a few days)
        Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature.

    5. [from me to client...]
        hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation

    6. [email from client to me...]
        Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense".

    7. [email from me to client...]
        hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider:
        1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.")
        2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it). ive spent two days coding that feature as a gesture of good will
        i would request that you please consider these two facts carefully and sincerely

    8. [email from client to me...]
        Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an additional cost, some of those "shelved features". See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function.

    a few notes about the above email...

    - i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all)
    - nothing has been charged for twice; i didnt charge the client for culled features. im charging him to now do those culled features
    - the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this, to be fair to them)
    - what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control

    9. [email from me to client...]
        hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more). so, i decided to delete it and start over, for two reasons:
        1) a long email is being disrespectful of your time (youre a busy businessman with things to do)
        2) whos wrong or right gets us no closer to fixing the problems we are experiencing
        what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money). i submit this list to you for you to add to as necessary. we then both take a good hard look at this list, and we decide which areas we are willing to give ground on. as an example, the list may look something like this:
            "louis, you keep taking away features you said you would do"
            [your grievance 2]
            [your grievance 3]
            [your grievance ...]
            "[client], i feel you dont properly read the specs i prepare for you..."
            [my grievance 2]
            [my grievance 3]
            [my grievance ...]
        if you are willing to give this a try, let me know. will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground'. what do you think?

    10. [email from client to me...]
        Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more professional presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site, and we will process and complete each v1.1 mini-spec item, one by one, until the job is complete.

    11. [last email from me to client...]
        hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on the issues at hand, i have decided to place your project on-hold for the moment. i will be considering further options on how to overcome our challenges over the next few days. i will contact you by monday 17/may to discuss any new options i have come up with, and whether i believe it is appropriate to restart work on your project at that point or not

    told you it was long... what do you think?


  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements

    - Allow Remote Desktop Connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running:
        - DNS Client
        - Function Discovery Resource Publication
        - SSDP Discovery
        - UPnP Device Host

    My Environment

    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers, depending on my work needs. I've found this setup to work well, except when it comes to the display window for my VMs.

    The Issue

    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn't connect via RDP.

    First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let's dig into the requirements above.

    Allow RDP Connection

    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from "Don't allow..." to whichever "Allow connections..." setting suits your needs. I chose the less secure option, as this is just my dev laptop.

    Network Adapter Type

    When I originally configured my VMs, I configured each to use 2 network adapters: one using the physical Ethernet adapter for internet use, and a virtual private adapter for communication between the VMs. The connection for the Ethernet adapter is an "External" adapter and thus doesn't connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, "Internal", which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs.

    Turn On Network Discovery

    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default, Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left. On the following screen, select "Turn on network discovery" for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn't save. If this is the case, then you most likely don't have the supporting services running (as was my case).

    Network Discovery Supporting Services

    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here). In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the sharing settings screen and found that Network Discovery was turned on. I'm not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on. Either way, the end result was a success.

    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host

    Before and After Results

    The first image is the smaller, square-shaped viewing window used by the native Hyper-V connection. The second is the full-screen RDP connection in all its widescreen glory.

    Conclusion

    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn't been able to get right for the longest time. Now that I've resolved these issues, I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off, feel free to let me know.

    -Frog Out

    Links

    Turning on Network Discovery
    http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx

    Services required for Network Discovery
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
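    As a sketch of how those four services could be enabled from a script rather than the Services MMC - this is my own addition, assuming the standard short service names (Dnscache, FDResPub, SSDPSRV, upnphost) correspond to the display names above - using Python to drive the built-in sc.exe tool from an elevated prompt:

        import subprocess

        # Assumed short names for the four network-discovery services.
        SERVICES = ["Dnscache", "FDResPub", "SSDPSRV", "upnphost"]

        for svc in SERVICES:
            # Set the service to start automatically (sc requires the space
            # after 'start=', hence the two separate arguments)...
            subprocess.run(["sc", "config", svc, "start=", "auto"], check=False)
            # ...and start it now if it isn't already running.
            subprocess.run(["sc", "start", svc], check=False)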


  • Computer Networks UNISA - Chap 12 &ndash; Networking Security

    - by MarkPearl
    After reading this section you should be able to Identify security risks in LANs and WANs and design security policies that minimize risks Explain how physical security contributes to network security Discuss hardware and design based security techniques Understand methods of encryption such as SSL and IPSec, that can secure data in storage and in transit Describe how popular authentication protocols such as RADIUS< TACACS,Kerberos, PAP, CHAP, and MS-CHAP function Use network operating system techniques to provide basic security Understand wireless security protocols such as WEP, WPA and 802.11i Security Audits Before spending time and money on network security, examine your networks security risks – rate and prioritize risks. Different organizations have different levels of network security requirements. Security Risks Not all security breaches result from a manipulation of network technology – there are human factors that can play a role as well. The following categories are areas of considerations… Risks associated with People Risks associated with Transmission and Hardware Risks associated with Protocols and Software Risks associated with Internet Access An effective security policy A security policy identifies your security goals, risks, levels of authority, designated security coordinator and team members, responsibilities for each team member, and responsibilities for each employee. In addition it specifies how to address security breaches. It should not state exactly which hardware, software, architecture, or protocols will be used to ensure security, nor how hardware or software will be installed and configured. A security policy must address an organizations specific risks. to understand your risks, you should conduct a security audit that identifies vulnerabilities and rates both the severity of each threat and its likelihood of occurring. Security Policy Content Security policy content should… Policies for each category of security Explain to users what they can and cannot do and how these measures protect the networks security Should define what confidential means to the organization Response Policy A security policy should provide for a planned response in the event of a security breach. The response policy should identify the members of a response team, all of whom should clearly understand the the security policy, risks, and measures in place. Some of the roles concerned could include… Dispatcher – the person on call who first notices the breach Manager – the person who coordinates the resources necessary to solve the problem Technical Support Specialist – the person who focuses on solving the problem Public relations specialist – the person who acts as the official spokesperson for the organization Physical Security An important element in network security is restricting physical access to its components. There are various techniques for this including locking doors, security people at access points etc. You should identify the following… Which rooms contain critical systems or data and must be secured Through what means might intruders gain access to these rooms How and to what extent are authorized personnel granted access to these rooms Are authentication methods such as ID cards easy to forge etc. Security in Network Design The optimal way to prevent external security breaches from affecting you LAN is not to connect your LAN to the outside world at all. The next best protection is to restrict access at every point where your LAN connects to the rest of the world. 
Router Access List – can be used to filter or decline access to a portion of a network for certain devices. Intrusion Detection and Prevention While denying someone access to a section of the network is good, it is better to be able to detect when an attempt has been made and notify security personnel. This can be done using IDS (intrusion detection system) software. One drawback of IDS software is that it can generate false positives – for example, flagging an authorized person who has forgotten his password and is attempting to log on. Firewalls A firewall is a specialized device, or a computer installed with specialized software, that selectively filters or blocks traffic between networks. A firewall typically involves a combination of hardware and software and may reside between two interconnected private networks. The simplest form of firewall is a packet-filtering firewall, which is a router that examines the header of every packet of data it receives to determine whether that type of packet is authorized to continue to its destination. Firewalls can block traffic into and out of a LAN; a minimal sketch of the packet-filtering idea follows below. NOS (Network Operating System) Security Regardless of the operating system, generally every network administrator can implement basic security by restricting what users are authorized to do on a network. Some of the restrictions relate to logons (place, time of day, total time logged in, etc.) and passwords (length, characters used, etc.). Encryption Encryption is the use of an algorithm to scramble data into a format that can be read only by reversing the algorithm. The purpose of encryption is to keep information private. Many forms of encryption exist, and new ways of cracking encryption are continually being invented. The following are some categories of encryption: key encryption, PGP (Pretty Good Privacy), SSL (Secure Sockets Layer), SSH (Secure Shell), SCP (Secure CoPy), SFTP (Secure File Transfer Protocol), and IPSec (Internet Protocol Security). For a detailed explanation of each, refer to pages 596 to 604 of the textbook. Authentication Protocols Authentication protocols are the rules that computers follow to accomplish authentication. Several types exist; the following are some of the common authentication protocols: RADIUS and TACACS, PAP (Password Authentication Protocol), CHAP and MS-CHAP, EAP (Extensible Authentication Protocol), 802.1x (EAPoL), and Kerberos. Wireless Network Security Wireless transmissions are particularly susceptible to eavesdropping. Two common wireless network security protocols are WEP and WPA.
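As a rough illustration of the packet-filtering idea described above, here is a minimal sketch in Python. The rule table and header fields are invented for illustration only – a real firewall operates on raw packets and much richer header data – but the first-match-wins logic is the essence of a router access list.

# A toy packet filter: the first matching rule decides; no match means deny.
RULES = [
    {"action": "deny",   "protocol": "tcp", "dst_port": 23},  # block Telnet
    {"action": "permit", "protocol": "tcp", "dst_port": 80},  # allow HTTP
    {"action": "permit", "protocol": "udp", "dst_port": 53},  # allow DNS
]

def filter_packet(packet):
    """Return True if the packet may continue to its destination."""
    for rule in RULES:
        if rule["protocol"] == packet["protocol"] and rule["dst_port"] == packet["dst_port"]:
            return rule["action"] == "permit"
    return False  # implicit "deny all" when no rule matches

print(filter_packet({"protocol": "tcp", "dst_port": 23}))  # False - Telnet blocked
print(filter_packet({"protocol": "tcp", "dst_port": 80}))  # True - HTTP allowed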

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate simplicity of form (‘IF-THEN-ELSE’) with simplicity of function. It is not the first time Tim has expressed these views, and not the first time I have responded to his assertions. Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken. In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken. There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term ‘rule processing’ is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently. As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous. Rule processing does not, for example, exist in splendid isolation to natural language processing. On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms. Rule processing does not exist in splendid isolation to CEP. On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham). Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic. Rule processing does not even live in splendid isolation to neural networks.
For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets. What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity. Does rule processing offer a ‘one size fits all’? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice. Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
It is very easy to say that you should replace your hardware when it is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware just because it is not performing at its best. I had a nightmare related to this issue while dealing with an infrastructure team: I suggested that they replace their faulty hardware, and they initially would not accept that the hardware was at fault. It is really easy to say “Trust me, I am correct”, but it is equally important to put some logical reasoning along with that statement. PAGEIOLATCH_XX is the kind of wait stat that we would like to blame directly on the underlying subsystem. Of course, most of the time, that is correct – the underlying subsystem is usually the problem. From Book On-Line: PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem. PAGEIOLATCH_XX Explanation: Simply put, this wait type occurs when a task is waiting for data from disk to move into the buffer cache. Reducing PAGEIOLATCH_XX waits: Just like any other wait type, this is a very challenging and interesting subject to resolve. Here are a few things you can experiment with: Improve your IO subsystem speed (read the first paragraph of this article if you have not already – it is far easier to recommend a step like this than to actually implement it). This type of wait can also happen due to memory pressure or other memory issues; putting aside the possibility of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters, because if for any reason memory is not optimal and is unable to receive the IO data, that situation alone can create this kind of wait. Proper placement of files is very important: check the file system for proper placement – LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc. Check the file statistics to see whether there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats). It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans; creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO (considering the covering index has far fewer columns than the clustered table, and all the other “it depends” conditions).
You can refer to the two articles below, previously written by me, that talk about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes. Updating statistics can help the Query Optimizer render an optimal plan, whether directly or indirectly. I have seen that updating statistics with full scan (again, if your database is huge and you cannot do this – never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan. Checking Memory-Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (a value consistently higher than 0-2 is a concern) SQLServer: Memory Manager\Memory Grants Outstanding (consistently high values; benchmark) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system) SQLServer: Buffer Manager\Page Life Expectancy (a value consistently lower than 300 seconds is a concern) Memory: Available Mbytes (information only) Memory: Page Faults/sec (benchmark only) Memory: Pages/sec (benchmark only) Checking Disk-Related Perfmon Counters Average Disk sec/Read (a value consistently higher than 4-8 milliseconds is not good) Average Disk sec/Write (a value consistently higher than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (a value consistently higher than your benchmark is not good) Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
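As a practical starting point for the analysis above, a query such as the following (my own sketch, not from the original article) shows the accumulated PAGEIOLATCH waits on an instance via the sys.dm_os_wait_stats DMV; high wait_time_ms values here are what justify digging into the file statistics and Perfmon counters listed above.

-- Accumulated PAGEIOLATCH waits since the last restart (or stats clear)
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;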

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed. But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC, as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first – first by explaining TDD and then by coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – it’s fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard. McCracken’s emphasis on real-world, pragmatic development is clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight? Well, no. I don’t think any book can make that claim. If that were possible, I think book list prices would skyrocket! That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC. Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started. Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development? Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach. I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen. If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
This is another common wait type. However, I still frequently see people getting confused between the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two: PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue. Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released. Now, let us understand the wait stat types related to latches. From Book On-Line: PAGELATCH_DT Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode. PAGELATCH_EX Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode. PAGELATCH_KP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode. PAGELATCH_SH Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode. PAGELATCH_UP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode. PAGELATCH_X Explanation: This wait type shows up when there is contention on access to in-memory pages. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it will have to wait, and that is when this wait type is created. Additionally, it is commonly visible when TempDB has high contention. If there are indexes that are heavily used, contention can be created as well, leading to this wait type. Reducing PAGELATCH_X waits: The following counters are useful to understand the status of PAGELATCH waits: Average Latch Wait Time (ms): The wait time for latch requests that have to wait. Latch Waits/sec: This is the number of latch requests that could not be granted immediately. Total Latch Wait Time (ms): This is the total latch wait time for latch requests in the last second. If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away; he has written an excellent blog post regarding how to find out TempDB contention, and the same post explains the terms GAM, SGAM and PFS used in allocation. For TempDB contention, Paul Randal also explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully. I totally understand that this blog post is not as clear as my other blog posts, so if this wait stat is one of your top wait types,
do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point to the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
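As a quick check along the lines Robert Davis describes (this exact query is my own sketch, not taken from his post), you can look for currently waiting tasks whose PAGELATCH resource is a TempDB page – database id 2 – which is the classic signature of TempDB allocation contention on GAM, SGAM or PFS pages.

-- Currently waiting tasks latched on a TempDB page (database id 2)
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';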

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that even though statistics should have been updated after lots of inserts into the table, they were not. (Read the details: SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In this example I created the following situation: create a table; insert 1,000 records; check the statistics; now insert 10 times more – 10,000 records; check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics for the database are TRUE. I requested two things in the example: 1) Why is this happening? 2) How to fix this issue? I have many answers – here is how I fixed it, which resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list them all. Solution: create a nonclustered index on the column City. Here is the working example; let us walk through this script – there is an added explanation at the end.
-- Execution Plans Difference
-- Estimated Execution Plan Vs Actual Execution Plan
-- Create Sample Database
CREATE DATABASE SampleDB
GO
USE SampleDB
GO
-- Create Table
CREATE TABLE ExecTable (ID INT,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(100))
GO
CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
GO
-- Insert One Thousand Records
-- INSERT 1
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Select Statement
SELECT FirstName, LastName, City
FROM ExecTable
WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
--------------------------------------------------------------
-- Round 2
-- Insert One Thousand Records
-- INSERT 2
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Select Statement
SELECT FirstName, LastName, City
FROM ExecTable
WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
-- Clean up Database
DROP TABLE ExecTable
GO
When I created the nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate the data in the column, the index is updated, which invalidates the execution plan – and this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap or on a column where statistics were auto-created. If you explicitly update the index, you can often see the statistics being updated as well. You can see that this is what is happening if you follow the explanation of John Sansom. John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled but instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile. SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION (RECOMPILE) GO Kevin Cross also has another suggestion: I agree with John. It is reusing the execution plan. Aside from OPTION(RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2: -- Clear execution plan cache before next test DBCC FREEPROCCACHE WITH NO_INFOMSGS; Nice puzzle! Kevin As this was a puzzle, John and Kevin both got the correct answer; there was no condition for the answer to be part of best practices. I know John, and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache using DBCC FREEPROCCACHE or recompiling each query every time is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics
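If you want to watch the statistics update happen while running the script above, STATS_DATE is handy. This small check is my own addition for illustration and is not part of the original solution:

-- Last update time of every statistics object on ExecTable
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('ExecTable');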

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out – probably the "Local" version unless you've explicitly changed it). So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties. Explanation The pooling parameters and many others are handled through Managed Beans (MBeans) that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean combines the configuration deployed as part of the application with any overrides defined as -D parameters on the JVM startup for the application server instance. Using WLST to Browse the Bean Values For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing. Step 0: Before You Start You will need the following: access to the console on the machine that is running the server; the WebLogic admin username and password (I'll use weblogic/password as my example here – yours will be different); the name of the deployed application (in this example FMWdh_application1); the package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg – this is based on the default path for your model project, so it should be fairly easy to work out); and the BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used). Step 1: Start the WLST console To start at the beginning, you need to run the WLST command, but that needs a little setup:
Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware Home:
[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin
Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
[oracle@mymachine bin] source setWLSEnv.sh
Start the WLST interactive console:
[oracle@mymachine bin] java weblogic.WLST
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline>
Step 2: Enter the WLST commands Connect to the server:
wls:> connect('weblogic','password')
Change to the Custom root; this is where the AMPooling MBeans are registered:
wls:> custom()
Change to the bc4j MBean directory:
wls:> cd ('oracle.bc4j.mbean.config')
Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: one for the config entry itself, and then one each for Security, DB Connection and AM Pooling.
So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on. The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using and the xcfg name: First of all, narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications. Now look for the correct application name, e.g. Application=FMWdh_application1. The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg. Finally look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed. Now you have identified the correct directory name, change to it (keep the name on one line of course – I've had to split it across lines here for clarity):
wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
    type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
    Application=FMWdh_application1,
    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
Now you can actually view the parameter values with a simple ls() command:
wls:> ls()
And here's the output, in which you can view the realtime values of the various pool settings:
-rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
-rw- AmpoolDoampooling true
-rw- AmpoolDynamicjdbccredentials false
-rw- AmpoolInitpoolsize 2
-rw- AmpoolIsuseexclusive true
-rw- AmpoolMaxavailablesize 40
-rw- AmpoolMaxinactiveage 600000
-rw- AmpoolMaxpoolsize 4096
-rw- AmpoolMinavailablesize 2
-rw- AmpoolMonitorsleepinterval 600000
-rw- AmpoolResetnontransactionalstate true
-rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
-rw- AmpoolTimetolive 3600000
-rw- AmpoolWritecookietoclient false
-r-- ConfigMBean true
-rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
-rw- Doconnectionpooling false
-rw- Dofailover false
-rw- Initpoolsize 0
-rw- Maxpoolcookieage -1
-rw- Maxpoolsize 4096
-rw- Poolmaxavailablesize 25
-rw- Poolmaxinactiveage 600000
-rw- Poolminavailablesize 5
-rw- Poolmonitorsleepinterval 600000
-rw- Poolrequesttimeout 30000
-rw- Pooltimetolive -1
-r-- ReadOnly false
-rw- Recyclethreshold 10
-r-- RestartNeeded false
-r-- SystemMBean false
-r-- eventProvider true
-r-- eventTypes java.lang.String[jmx.attribute.change]
-r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
-rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl
Thanks to Brian Fry on the JDeveloper PM Team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
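Pulling the interactive steps above together, a complete WLST session looks roughly like the sketch below. The admin server URL is my assumption – substitute your own host and port – and the long MBean path is the example one from this article; WLST is Jython, so adjacent string literals inside the parentheses are simply concatenated.

connect('weblogic', 'password', 't3://localhost:7001')
custom()
cd('oracle.bc4j.mbean.config')
cd('oracle.bc4j.mbean.config:name=AMPool,'
   'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
   'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
   'Application=FMWdh_application1,'
   'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
ls()
# To change a writable (-rw-) value rather than just view it:
# set('AmpoolMaxpoolsize', 2048)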

    Read the article

  • Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

    - by fip
WebLogic JMS Topics typically run in a WLS cluster, and so do the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Since a JMS Topic by nature distributes the same copy of a message to all its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites: How do you assure that the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How do you make sure the messages are evenly distributed (load balanced) across SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages are evenly distributed to the same composite running on different SOA cluster nodes without causing duplication. 1. The typical configuration In this typical configuration, we achieve load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic (PDT) along with sharable subscriptions. You can reference the documentation for an explanation of PDTs, and this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters, which involves the following steps: Step A. Configure the JMS Topic to be a UDD and PDT, at the WebLogic cluster that houses the JMS Topic. Step B. Configure the JCA Connection Factory with the proper FactoryProperties, at the SOA cluster. Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file). Here are more details of each step: Step A. Configure the JMS Topic to be a UDD and PDT You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then on the same administration screen you can set "Distribution Type" to "Uniform" and the Forwarding Policy to "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B. Configure the FactoryProperties of the JCA Connection Factory You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory act as a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list: ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: except for ClientID, the other properties (ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false) are all default settings for the JmsAdapter's connection factory, so you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
The ClientID is different from the subscriber ID that we will discuss in the later steps. To keep it simple, just remember that you need to specify the ClientID and make it unique per connection factory. Here is the example setting: Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file) In the following example, the value 'MySubscriberID-1' was given as the value of the property 'DurableSubscriber':
<adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
  <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project. 2. The "atypical" configurations For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We discuss those scenarios here: Configuration A: The JMS Topic is NOT a PDT In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out the "replicated" messages (see the sketch after this section). Configuration B: ClientIDPolicy=RESTRICTED In this case, you need separate factories for different composites. More accurately, you need separate factories for the different *.jca files of the JmsAdapter. References: Managing Durable Subscription WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions JMS Troubleshooting: Configuring JMS Message Logging: Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API
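As a sketch of Configuration A, the message selector can be declared as an extra activation-spec property in the *.jca file. The snippet below reuses the example adapter from Step C; the property name MessageSelector is my assumption based on the standard JmsAdapter activation spec, so verify it against your adapter version before relying on it.

<activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
  <property name="DurableSubscriber" value="MySubscriberID-1"/>
  <!-- filter out copies forwarded between topic members (assumed property name) -->
  <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
  <property name="PayloadType" value="TextMessage"/>
  <property name="UseMessageListener" value="false"/>
  <property name="DestinationName" value="jms/MyTestUDDTopic"/>
</activation-spec>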

    Read the article

  • How to find domain registrar and DNS hosting with good DNSSEC support?

    - by rsp
Simplified problem I want to buy a domain and make a website that is fully secured with DNSSEC. Background I've been hearing about the insecurity of DNS for years. I've watched all of the talks by Dan Kaminsky and others, from DNS exploits to The Future of DNS Security Panel. I knew that using DNS without security is a disaster waiting to happen. I followed the development of the DNSSEC standard. I celebrated the key signing ceremony. Everything was on the right track to finally have a secure DNS system in place. And now, more than 2 years later, I just wanted to do what everyone said I should do: use DNSSEC for a new domain. So I need a domain registrar and a DNS hosting service that support DNSSEC. Surprisingly, it is not that easy to even find out who supports DNSSEC. It was actually much easier to find info on DNSSEC two years ago, when everyone was going to support DNSSEC Real Soon Now; years have passed and I hardly see any progress. I just hope that I was looking in the wrong places and someone here will clear up all of the doubts. I hope that other people who want to have a secure website will also find this question useful. What is needed: a registrar and DNS servers with full DNSSEC support for .com domains. What is not needed: IPv6 support, web hosting, anything more. What I found out so far Go Daddy offers a Premium DNS service for an additional $36 per year that lets you "Secure up to 5 domains with DNSSEC". easyDNS has DNSSEC available in beta across all service levels (you need to enable the "beta" flag in configuration), but it doesn't seem to be production ready, and judging from the lack of updates it isn't a feature of highest priority (the last update is from March 2011 on the easyDNS blog). Name.com – according to The Register (US domain registrar does IPv6, DNSSEC), it has had DNSSEC support since 2010, but right now (October 2012) I couldn't find anything related to DNSSEC on their website. Dynadot, which is very often recommended, doesn't support DNSSEC. Namecheap, also often recommended, doesn't support DNSSEC; a support answer from 2011 suggested that it was being added, but in 2012 still no ETA is given to customers. DynDNS was supposed to support DNSSEC; I found a link explaining DNSSEC support, but it gives a 404 Not Found page and offers a search box – when searching for DNSSEC I get "No results were found for your query." GKG was recommended online for DNSSEC support, but it's hard to find any information on the level of DNSSEC support – there is a brief explanation of what DNSSEC is and how to sign Delegation Signer records in their FAQ, but no information about the level of actual support can be found. Ask Slashdot: Which Registrars Support DNSSEC? from July 2011 – answers list Go Daddy, DynDNS, GKG, and Name.com as registrars that support DNSSEC, but: see above. Related questions How to find web hosting that meets my requirements? What is needed to add DNSSEC to my site? DNS hosting better managed by Domain provider or Hosting provider? Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers? In no. 1, no one mentions DNS at all. In no. 2, answers only mention the .se TLD; there are very few answers and they seem very outdated. In no. 3, one answer says "On projects that demand higher security, I might look for a web host that supports DNSSEC", but no more information is provided. The only relevant answers are in no. 4, where easyDNS is recommended by someone who has never used them personally.
Meanwhile, as of October 2012, the support of DNSSEC is described as "in beta" on the easyDNS feature list. Another answer recommends SiteGround, but searching their site for DNSSEC returns no results. Other answers recommend web hosting providers that don't meet the requirement of DNSSEC support. Also, the question mentioned above lists 9 very specific requirements other than only DNSSEC (e.g. HTTP-only login cookies, two-factor authentication, no DNS record limits, DNS statistics of queries/day, audit trails, etc.), which might have excluded many possible recommendations if one is only interested in DNSSEC support. Conclusions I thought that by the end of 2012 the support of DNSSEC among domain registrars and DNS providers would be nearly universal. I am shocked that the support seems virtually nonexistent. Is this a result of some serious problems with DNSSEC adoption? Or is it just not a hot topic, so no one bothers anymore? According to the DNSSEC Scoreboard, roughly 0.1% of .com domains support DNSSEC. Could that be caused by the lack of DNSSEC support among registrars and DNS providers? Is the information too hard to find, or does no one care? There is not even a "dnssec" tag here. Questions The information is surprisingly hard to find. That is why I am asking for first-hand experience and personal recommendations. Has anyone here actually set up a website with DNSSEC, from the domain registration to the configuration of DNS servers? Can anyone recommend any of the registrars mentioned above? Can anyone recommend any registrar not mentioned above?
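For what it's worth, once a domain is signed you can at least verify end-to-end validation yourself. The sketch below uses the dnspython library to check whether a validating resolver sets the AD (authenticated data) flag for a query; both the library choice and the resolver address are my assumptions, not something from the original question.

import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]          # must be a DNSSEC-validating resolver (assumption)
resolver.use_edns(0, dns.flags.DO, 4096)    # request DNSSEC processing via EDNS0
answer = resolver.resolve("ietf.org", "A")  # a known signed zone; use .query() on dnspython 1.x
if answer.response.flags & dns.flags.AD:
    print("Response was DNSSEC-validated (AD flag set)")
else:
    print("No DNSSEC validation (AD flag not set)")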

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >