Search Results


  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies?  Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school.) Even Little Bo Peep had to keep a wary lookout.    We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and take steps to avoid messy confrontations well in advance. After we’d apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it,  translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA.   It isn’t always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity.  A large number of the issues that occur in the enterprise happen just randomly or even just one time ever in a spurious manner, like in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion row table together, and simultaneously stream the HD feed of 5 different sporting events, making the network access slow while the corporate online sales just started. The decent DBA team, like the going, gets tough under such circumstances. They spring into action, checking all of the sources of active information, observes the issue is no longer happening now, figures that either it wasn’t the database’s fault and that the reboot of the whatever device on the network fixed the problem.  This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn’t even your fault.   When fires start raging within the corporate software forest, the DBA’s instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are.  Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance.  Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires a skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem.  You can’t automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.    To achieve this, the DBA will check the situation as it is currently happening,  and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons. 
He’ll use monitoring tools, both built-in and commercial (not to be too crassly commercial or anything, but one such tool is SQL Monitor), plus lots of homebrew monitoring tools, to watch for problems and changes in the server environment. You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that “response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn’t the database server’s fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal.” And that is the best thing for any DBA to observe…
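     A minimal sketch of the kind of homebrew baseline capture described above, for SQL Server (the table name and the idea of a scheduled job are my own invention; the DMV itself is standard):

        IF OBJECT_ID('dbo.WaitStatsBaseline') IS NULL
            CREATE TABLE dbo.WaitStatsBaseline (
                capture_time        datetime     NOT NULL,
                wait_type           nvarchar(60) NOT NULL,
                waiting_tasks_count bigint       NOT NULL,
                wait_time_ms        bigint       NOT NULL
            );

        -- Run on a schedule (a SQL Agent job, for example) and compare today's
        -- numbers against the history when something "feels" slow.
        INSERT INTO dbo.WaitStatsBaseline (capture_time, wait_type, waiting_tasks_count, wait_time_ms)
        SELECT GETDATE(), wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats;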

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7 year old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch: "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there have been reports that this solution is prohibitively expensive, labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn’t find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." - Steve. Since the costs of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash). Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack Disconnect the laptop battery for an extended period of time. Doesn't work, laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for awhile. The poor thing. Clear CMOS by setting the proper jumper setting or by removing the CMOS (RTC) battery, or by short circuiting a (hidden?) jumper that looks like a pair of solder marks -various sources for various Satellite models: Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020 Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar Satellite A215: "Short the B500 solder pads on the system board." -fixya Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper. 
Here are pictures (click for full resolution): Where is the jumper (or solder pads) to short-circuit and wipe the CMOS on this board? Possibly related questions: "Remove Toshiba laptop BIOS password?" and "Password Problem Toshiba Satellite".

    Read the article

  • Bash can't start a programme that's there and has all the right permissions

    - by Rory
    This is a gentoo server. There's a programme prog that can't execute. (Yes the execute permission is set) About the file $ ls prog $ ./prog bash: ./prog: No such file or directory $ file prog prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped $ pwd /usr/local/bin $ /usr/local/bin/prog bash: /usr/local/bin/prog: No such file or directory $ less prog | head ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Intel 80386 Version: 0x1 I have a fancy less, to show that it's an actual executable, here's some more data: $ xxd prog |head 0000000: 7f45 4c46 0101 0100 0000 0000 0000 0000 .ELF............ 0000010: 0200 0300 0100 0000 c092 0408 3400 0000 ............4... 0000020: 0401 0a00 0000 0000 3400 2000 0700 2800 ........4. ...(. 0000030: 2600 2300 0600 0000 3400 0000 3480 0408 &.#.....4...4... 0000040: 3480 0408 e000 0000 e000 0000 0500 0000 4............... 0000050: 0400 0000 0300 0000 1401 0000 1481 0408 ................ 0000060: 1481 0408 1300 0000 1300 0000 0400 0000 ................ 0000070: 0100 0000 0100 0000 0000 0000 0080 0408 ................ 0000080: 0080 0408 21f1 0500 21f1 0500 0500 0000 ....!...!....... 0000090: 0010 0000 0100 0000 40f1 0500 4081 0a08 ........@...@... and $ ls -l prog -rwxrwxr-x 1 1000 devs 725706 Aug 6 2007 prog $ ldd prog not a dynamic executable $ strace ./prog 1249403877.639076 execve("./prog", ["./prog"], [/* 27 vars */]) = -1 ENOENT (No such file or directory) 1249403877.640645 dup(2) = 3 1249403877.640875 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) 1249403877.641143 fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0 1249403877.641484 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b3b8954a000 1249403877.641747 lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek) 1249403877.642045 write(3, "strace: exec: No such file or dir"..., 40strace: exec: No such file or directory ) = 40 1249403877.642324 close(3) = 0 1249403877.642531 munmap(0x2b3b8954a000, 4096) = 0 1249403877.642735 exit_group(1) = ? About the server FTR the server is a xen domU, and the programme is a closed source linux application. This VM is a copy of another VM that has the same root filesystem (including this programme), that works fine. I've tried all the above as root and same problem. Did I mention the root filesystem is mounted over NFS. However it's mounted 'defaults,nosuid', which should include execute. Also I am able to run many other programmes from that mounted drive /proc/cpuinfo: processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 1 cpu MHz : 2992.692 cache size : 1024 KB fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr bogmips : 5989.55 clflush size : 64 cache_alignment : 128 address sizes : 36 bits physical, 48 bits virtual power management: Example of a file that I can run I can run other programmes on that mounted filesystem on that server. 
For example: $ ls -l ls -rwxr-xr-x 1 root root 105576 Jul 25 17:14 ls $ file ls ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped $ ./ls attr cat cut echo getfacl ln more ... (you get the idea) ... rmdir sort tty $ less ls | head ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Advanced Micro Devices X86-64 Version: 0x1
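     One further check that fits the symptoms above: execve() returning ENOENT for a file that plainly exists usually means the ELF interpreter named inside the binary is missing, which is exactly what happens when a 32-bit program lands on a 64-bit-only root filesystem (note that ls here is ELF 64-bit while prog is ELF 32-bit). A hedged diagnostic sketch, assuming readelf is available:

        $ readelf -l ./prog | grep 'program interpreter'
        # typically reports something like:
        #   [Requesting program interpreter: /lib/ld-linux.so.2]
        $ ls -l /lib/ld-linux.so.2
        # if that loader (and the 32-bit glibc it belongs to) is absent, the
        # kernel returns "No such file or directory" for the whole exec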

    Read the article

  • ESXI guests not using available CPU resources

    - by Alan M
    I have a VMWare ESXI 5.0.0 (it's a bit old, I know) host with three guest VMs on it. For reasons unknown, the guests will not use much of the availble CPU resources. I have all three guests in a single pool, with the hosts all configured to use the same amount of resource shares, so they're basically 33% each. The three guests are basically identically configured as far as their VM resources go. So the problem is, even when the guests are performing what should be very 'busy' activitites, such as at bootup, the actual host CPU consumed is something tiny, like 33mhz, when seen via vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings; cranking up the reservation, etc. No matter. The guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions? Update after reading various comments below Per the suggestions below, I did remove the guests from the application pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to do a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (guest is w2k8r2 server). Host graphs for CPU, Mem, Disk are basically flatlining; very little demand. Same is true for the guest stats; while the guest itself seems to be crawling, the guest resource graphing shows very little activity across CPU, Mem, Disk. Host is a Dell PowerEdge 2900, has 2 physical CPU,20gb RAM. (it's a test/dev environment using surplus gear) Guest1 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest2 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest3 has: VM ver. 7, 1vCPU, 2gb RAM, 2tb storage which lives on a RAID-5 ISCSI NAS box Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (mem, disk) on demand. Another Update After checking the stats, it would appear that the host is indeed not busy at all, neither is the guest. I believe I have a good idea on the issue, though; a messed-up VMWare Tools install. The guest has VMware Tools on it, but the host says it does not. VMWare Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigation. I do not know the origin of the guest itself, nor the specifics on the original VMWare Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is; the problem truly is the guest; the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly. My Final Update I am 99% certain the guest VM had something fundamentally wrong with it re VMWare Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMWare tools. The guest runs just great, and takes up it's allotment of resources when it needs to; e.g. 
it eats up about 850 MHz of CPU during startup, then ticks down to idle once the guest OS is stable.
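     A hedged way to confirm whether a guest is actually demanding CPU or being held back is esxtop on the ESXi host itself (the flags below are standard esxtop options, not values taken from this environment):

        # Interactive: press 'c' for the CPU view. High %RDY or %MLMTD means the
        # VM wants CPU it is not being given; a low %USED on a "slow" guest
        # means the guest simply is not asking for it.
        esxtop

        # Batch capture for offline review: 10-second samples, 30 iterations
        esxtop -b -d 10 -n 30 > /tmp/esxtop-cpu.csv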

    Read the article

  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi, Will I get any extra genuine added performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA - 2 port SATA , pci-express (1x) , raid 0/1. Ok it only supports two SATA drives. Purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA connected 2TB hard drive - by dealing with two of the internal hard drives. My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green Drive (e-SATA) and one DVD drive can already be supported by the Intel P55 & JMicron controllers on the ASUS motherboard : the Intel P55 (controls six HDD; configured as three x RAID 1), and the JMicron (controls two HDD as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port (controlled by the JMicron)). Bigger picture details : I have an ASUS motherboard designed for the LGA1156 type processor and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor, and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply is more than sufficicient for the system. Corsair AX850. The system will never need the full 850 watts (future : second graphics card). The RAID card would provide hardware RAID 1 for two of the eight intrnal drives. It would either reduce the load on : the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives, with a better RAID, example the RAID 5. My Issue & assumption : I am trusting that the Intel P55 chipset can properly handle six drives, configured as three * RAID 1. I am assuming that the JMicron can handle, using its RED SATA ports, one RAID-1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1). White port 2 is not used. The e-SATA connection is from the JMicron straight to, and through the motherboard - to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? OR Is it wisdom to take the pressure a little off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID Zero. RAID 5 would be nice. The CPU, Intel Core i7-870 will provide the clout. Context to nine drives : I am using virtualisation with Windows 7 Ultimate. Bootable VMs. The operating system gets a mirror. Loaded apps gets a mirror. The current design data is kept in another mirror and Another mirror is back-up one and / or VM territory. Then the external 2TB drive (via e-SATA) is the next layer of data security and then finally, I use off-site data security. Thanks.

    Read the article

  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is from a web administrator's perspective fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, .htaccess files providing Expires: headers and the like. An associated site demoes various CGI scripts that fall under the category of "and other creations"; the site as a whole has the purpose of sharing my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update RSS feed and the "starting point" link on the home page, has served that purpose fairly well. I looked around here on web hosting, and found the note on web host reccommendations as a good note for "What are some of people's favorite web hosts overall," but I wanted to ask a more focused question of "What are the best web hosts for criteria XYZ:" I am looking at a VPS so I will have root, be able to install stuff and edit Apache's config files etc., running Gentoo or other Linux, BSD, or the like. I would like a system that is secure enough that the host's vulnerabilities are mostly the ones that come along with what I am trying to do: that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1. I would like good uptime/reliability and competent support staff: if the level 1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them. Ideally I would like a site hosted within some place that will have low latency for U.S. visitors in particular. I would like a hosting solution that will be with a stable business, one that will probably be around, and one unlikely to vanish without warning. With those things specified, I would be interested in knowing what are the less expensive options. (I expect that some of the things I've specified will knock out all of the cheapest options, but I'm still interested in price.) With all that stated, I'd like to back up a bit and look at whether I am asking the right question. I am concerned that the above is a very good way of asking, "How can I keep my site in line with the wave of the past?" I am wondering if it might be specifically wiser to look to adapt my site to newer technologies instead of trying to keep it on older technologies. For instance, while I would hardly portray my site as a way to show off the full power of Google App Engine, the main site at least should be a straightforward port if I were to do that. And beyond Google App Engine, my knowledge of cloud solutions is basic. If it is a better and more future-proof solution to port my site to another kind of solution, I would be interested in knowing where those future-proof solutions lie. So I would be interested in wisdom. If the question I asked in detail is still a good question to be asking, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try there? Any thoughts would be appreciated.

    Read the article

  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce9800M-GS graphics. Normally, Windows boots normally, but about 20% of the time (rough estimate), it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. It even specifies in the logs that the Nvidia Display Driver service started successfully, even though it has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times. I used System Information for Windows to check for problems there, but the only thing that stood out was the following: Core Temperature 4486449 °C (8075639 °F) Shaders Temperature 1171513530 °C (2108724330 °F) I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's opererating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed. All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps, but after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have no drops in framerate under any situation. I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account so it can only be downloaded 10 times, so please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare. EDIT It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this happens sometimes the first normal boot as well. I am quite sure this is related, so I am adding the pnp tag. Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related. Some more information for those who don't download my SIW report file: Antivirus and Firewall are ESET Smart Security I have three different virutalization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts. EDIT 2 I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic. 
I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.
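     For reference, a hedged sketch of the commands involved in handing boot-time checks back to autochk; the poster has already tried pieces of this, so it is shown only to make the steps concrete, and it assumes BootExecute is the only thing suppressing CHKDSK (which the SFC corruption suggests may not be true). Run from an elevated prompt:

        :: Restore the default boot-time check hook
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v BootExecute /t REG_MULTI_SZ /d "autocheck autochk *" /f

        :: Reset CHKNTFS to its defaults and mark the system volume dirty so a
        :: check is forced on the next restart
        chkntfs /d
        fsutil dirty set C: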

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB) and it seems FreeRADIUS service can't start properly on startup. But it starts fine using root user or in debug mode (radiusd -X) and works just fine! Debug mode shows no errors. systemctl command shows that radiusd.service has failed to start. /var/log/messages output: Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server... Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server... Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server.... Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent. Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*. Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1 Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server.. Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state. Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK] Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming. Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server. Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server. /var/log/radius/radius.log output: Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius" Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0) Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)' Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0) Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql" After seeing this I tried to replicate the problem, killed mariadb.service and started to run debug mode again. And it spits out the same problem as in the radius.log. I tried disabling iptables and firewalld and rebooting, but no luck: systemctl disable iptables systemctl disable firewalld So maybe the problem is in the process startup order or delay of some kind. Maybe FreeRADIUS's SQL module can't connect to not yet started MariaDB? If it, how can I fix this? In earlier versions of RHEL/CENTOS I know you easily see service start order in like rc.d or stuff, now IDK. I am new to this fancy "systemd", "systemctl", "firewalld" stuff Centos 7 introduced so sorry I'm a little bit confused. Also this new FreeRADIUS 3 structure... PS. MariaDB is enabled on startup, credentials in FR DB configuration are correct A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output: [Unit] Description=FreeRADIUS high performance RADIUS server. 
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/var/run/radiusd/radiusd.pid
ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
ExecStartPre=/usr/sbin/radiusd -C
ExecStart=/usr/sbin/radiusd -d /etc/raddb
ExecReload=/usr/sbin/radiusd -C
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
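     Given that unit file, the startup order really is the likely culprit: radiusd is only ordered after syslog and the network, not after MariaDB. A minimal sketch of a fix, assuming a systemd drop-in rather than editing the packaged unit (the drop-in path and file name are my choice; mariadb.service is the unit name shown in the boot log):

        # /etc/systemd/system/radiusd.service.d/wait-for-db.conf
        [Unit]
        After=mariadb.service
        Wants=mariadb.service

        # then apply it with:
        #   systemctl daemon-reload && systemctl restart radiusd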

    Read the article

  • Why does my computer run slowly and freeze sometimes?

    - by Brae
    EDIT I disabled sound in the BIOS, rebooted and it hung. I removed the (previously) faulty HDD, rebooted and it hung. I have managed to get my Realtek audio manager open again after its mysterious disappearance. Subsequently my microphone is now working again, to fix it I had to uninstall audio drivers, disable audio in BIOS, install audio drivers, enable audio in BIOS. Access via network (with faulty HDD in) seems to not be triggering hangs at the moment. I think with the sound problem fixed it might play a little nicer, but I think its still going to hang. If it does, then I'm fairly sure its been narrowed down to the mobo. EDIT Pretty convinced my motherboard is the culprit, because nothing else seems to have any obvious problems (bar the hard drive, which the PC still hangs without it being plugged in) So thank you all for helping, once I get more rep ill up a few of the answers. My PC is doing some weird things... Sometimes when I open up a program, lets say Adobe Photoshop, it will load everything normally, nice and quick no problems and I can use it fine. Other times its a little odd, and it loads the program as if it's only using half of the CPU. It's pretty obvious when it does it, normally the loading screen skims past, but when it does this weird load you see it slowly tick though each thing, and the program itself becomes incredibly slow. Even Google Chrome does it sometimes. Yet when I exit and reopen the program without doing anything else, it typically opens fine without lagging. I think this problem is probably a symptom of something bigger, because of other problems I'm having. Random hangs; no matter what I have open or what I'm doing. My PC will sort of freeze up for a few seconds. If music is playing it will either loop the last second or two, or it will buzz. This only lasts for a few seconds then it returns to normal without having to restart. During this time all programs lock up and freeze, and the mouse and keyboard are useless. I am also having a weird issue with my audio jacks, without touching my PC at all, sometimes I will see a popup saying that I have unplugged something or plugged something in, neither of which has actually happened. Pretty sure this is cause by the motherboard. I recently had a 'Pink Screen of Death' (yes pink) which pointed to my audio drivers. The lockups seem to happen with some consistency when someone is accessing my shared files via my home LAN. Which leads me to believe either one of my hard drives is acting up or more likely the controller. One of my drives had a bit of a crash before, I used Spinrite and managed to recover my stuff and now the drive appears to be working okay. This is possibly adding to the problems. My best guess is something has gone wrong with my motherboard, possibly a power issue or a chip has died, I really don't know. So what I would like to know is: Have I have missed some obvious diagnostic to help figure out what it is, or should I just bite the bullet and assume its my motherboard acting up and buy a new system? dxdiag[64-bit] - http://pastebin.com/G30kb2TL PSOD (minidump) - http://pastebin.com/aZsv0H56 HWiNFO64 (system info / specs) - http://pastebin.com/X6h3K8g6
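     Given the earlier drive crash, one cheap hedged check before condemning the motherboard is the drives' own health reporting (the wmic query is built into Windows; smartctl assumes smartmontools is installed, and the device name is a typical default rather than one taken from this machine):

        :: Quick built-in check; a Status of "Pred Fail" means SMART is already unhappy
        wmic diskdrive get model,serialnumber,status

        :: Fuller SMART detail, if smartmontools is installed
        smartctl -a /dev/sda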

    Read the article

  • Alias wordpress folder from within another website

    - by Bretticus
    I have a little dilemma. I wrote a custom PHP MVC framework and built a CMS on top of it. I decided to give nginx+fpm a whirl too. Which is the root of my dilemma. I was asked to incorporate a wordpress blog into my website (yah.) It has much content and it's not feasible in the short amount of time I have to bring all the content into my CMS. Because of using Apache for years, I'm, admittedly, a little lost using nginx. My website has the file path: /opt/directories/mysite/public/ The wordpress files are located at: /opt/directories/mysite/news/ I know I just need to setup location(s) that targets /news[/*] and then forces all matching URI's to the index.php within. Can someone point me in the right direction perhaps? My configuration is below: server { listen 80; server_name staging.mysite.com index index.php; root /opt/directories/mysite/public; access_log /var/log/nginx/mysite/access.log; error_log /var/log/nginx/mysite/error.log; add_header X-NodeName directory01; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { try_files $uri $uri/ /index.php?route=$uri&$args; } location ~ /news { try_files $uri $uri/ @news; } location @news { fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_split_path_info ^(/news)(/.*)$; fastcgi_param SCRIPT_FILENAME /opt/directories/mysite/news/index.php; fastcgi_param PATH_INFO $fastcgi_path_info; } include fastcgi_params; include php.conf; location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } } My php.conf file: location ~ \.php { fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_pass unix:/tmp/php-fpm.sock; # If you must use PATH_INFO and PATH_TRANSLATED then add # the following within your location block above # (make sure $ does not exist after \.php or /index.php/some/path/ will not match): #fastcgi_split_path_info ^(.+\.php)(/.+)$; #fastcgi_param PATH_INFO $fastcgi_path_info; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } fastcgi_params file: fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; Thanks, in large part, to @Kromey, I have adjusted my location /news/ but I am still not getting the desired result. I was able to learn to tack a ~ my /news location as I discovered that my php location was being matched first. With this setup, I now get a 200 status, but the page is blank. Any ideas?
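     A hedged sketch of one way to wire this up, given the paths above: give /news its own root one directory up, so the URI maps onto /opt/directories/mysite/news, and use the ^~ modifier so the server-level PHP regex location (included from php.conf) never captures /news requests. This is a sketch rather than a drop-in config, and the fastcgi_param list simply mirrors the one already in php.conf:

        location ^~ /news {
            root  /opt/directories/mysite;              # nginx appends "/news..." to this
            index index.php;
            try_files $uri $uri/ /news/index.php?$args;

            location ~ \.php$ {
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param QUERY_STRING    $query_string;
                fastcgi_param REQUEST_METHOD  $request_method;
                fastcgi_param CONTENT_TYPE    $content_type;
                fastcgi_param CONTENT_LENGTH  $content_length;
                include fastcgi_params;
                fastcgi_pass unix:/tmp/php-fpm.sock;
            }
        }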

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
     A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?
Thanks some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>
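     For what it is worth, a heavily hedged sketch of the brute-force approach from the linked RAID-5 answer, adapted to the parameters above (chunk 64K, metadata 1.0, 7 devices with slots 2 and 4 missing). Re-running --create rewrites the superblocks, so this should only ever be pointed at overlay devices or copies of the disks, never the sole remaining copy of the data; the two orders listed are placeholders, not a recommendation:

        #!/bin/sh
        # Try candidate device orders (in slot order, "missing" for absent slots);
        # the order that yields a recognisable ext4 filesystem is very probably
        # the original one. mdadm will ask for confirmation before each create.
        for order in \
            "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3" \
            "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdf3 /dev/sdg3"
        do
            mdadm --stop /dev/md0 2>/dev/null
            echo "Trying order: $order"
            mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=7 \
                  --chunk=64 --metadata=1.0 $order
            fsck.ext4 -n /dev/md0 && echo ">>> candidate order found: $order"
        done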

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators" Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on. I am one of the app admins. This is different than what I am used to. I used to just write code. The sysadmin took care not only of the OS but also installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem. I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories, to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and amount of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and making it far and away from the default set-up. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres. So giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.
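     To make the Apache side concrete, a hedged sketch of a sudoers fragment (the group name and paths are hypothetical and distribution-dependent); the idea is to delegate service control and config editing without handing out a root shell:

        # /etc/sudoers.d/appadmins  -- edit with:  visudo -f /etc/sudoers.d/appadmins
        # Service control for the application admins
        %appadmins ALL = (root) /usr/sbin/apachectl, /sbin/service httpd *
        # Config editing via sudoedit, so no root editor session is ever opened
        %appadmins ALL = (root) sudoedit /etc/httpd/conf/*, sudoedit /etc/httpd/conf.d/*

     Combined with a group-owned document root, that covers most of the day-to-day work, and it is roughly the same shape as the sudo su - postgres arrangement already working for PostgreSQL.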

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad pratice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connection to be as painless as possible to do so, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office: Two shiny new i7 servers, each running Windows Server 2008. Precise eventual layout is still being debated, a little, but the current suggestion is that one is primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication). An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere. A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything. Assorted desktops, laptops, etc, at least one for each person in the office (mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement. Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. 
Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

  • How do I make an external hard drive keep the same drive letter permanently?

    - by andygrunt
    I have a desktop PC (2002 vintage) running Windows XP that I turn on about 2 or 3 times per week. I have a mains powered 250Gb Western Digital hard disk connected to it via USB. I always turn the hard disk on before the PC so it's up and running as the PC boots. When I first connected the external hard disk, the PC assigned it a letter ('i' if it matters) and I've installed software to it, created shortcuts to various files and folders on the disk using that letter. For years everything was fine then I would boot the PC and the hard disk was assigned a different letter. I'd then have to go into 'my computer/manage/disk management' and manually change the letter back to 'i'. If I then rebooted the PC, the hard disk would usually still be 'i' but after the next reboot would be some other random letter and I have to manually change it back to 'i'. This would go on for some time then there'd be periods when the it would always be 'i' then, for no apparent reason (no new devices added, for example), the drive letter would start changing again. At the moment it's in random drive letter mood so I thought I'd ask the following question... How do I assign the external hard disk to be 'i' permanently? Answer: Thanks Molly that seems to have done the trick (after a little fiddling) - slightly disappointed there wasn't a way to do it within Windows without installing something else though. For anyone else trying this, it wasn't completely straightforward so here's what happened with me. I installed USBDLM as per the instructions on its website. I guessed that I had to assign the first USB letter to i so replaced the 'Letter1=' lines to 'Letter=I' in the ini file. To test it, I rebooted the PC only to find it came back up with the display set to 640x480 in 16 colours. After some investigation, I re-installed the display drivers and rebooted and set the display back to its usual setting. The external hard disk now gets set to 'i' but I found I had to re-apply sharing status to it so it was seen from my laptop which is on the same network. The end result of all this is that it now does what I wanted although it does act as though the hard drive has just been plugged in a few seconds after the Windows desktop appears i.e. the little box appears with a progress bar as it searches through the contents of the 'new' hard drive and I eventually get a dialogue box saying 'This disk or device contains more than one type of content. What do you want Windows to do?' and lists options such as play media files, print the pictures or open folder to view the files. This is a tiny pain I wish didn't happen but not exactly a huge price to pay. Other than that - it seems to work fine :) Looks like a spoke too soon... Every time I reboot, I have to re-share the 'i' drive (which I didn't have to do before) so it can be seen by my laptop on the same network. Any ideas how to make that permanent?
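     For anyone who wants to stay inside Windows, the manual reassignment can at least be scripted with diskpart (the volume number below is hypothetical; check it with "list volume" first). This performs the same per-volume assignment Disk Management does, so it only helps if Windows keeps the volume's identity straight between boots:

        rem assign-i.txt  --  run with:  diskpart /s assign-i.txt
        list volume
        select volume 5
        assign letter=I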

    Read the article

  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors. The servers needs to be reachable over the internet. A little more detail - the networks the servers need to reach are inside of the data center, and are "trusted". Connections to these networks will be achieved through intra data center cross connects. It is kind of like a manufacturing line where we receive data from one vendor (burst-able up to 200 Mbits), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbits). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site to site VPN. (This part is only relevant as far as hardware selection goes). I have 2 configurations in mind: Cisco 2911 (2921) (with the additional wan ports module) and a layer 2 switch - in this scenario, I would use the router also for VPN. Cisco 3560 layer 3 switch to interconnect the networks inside of the data center and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the Wan side (internet) and VPN. I envision the setup to be as follows: Internet - ASA - 3560 Vendors - 3560 - Servers The general idea is that the ASA acts as a firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding however, a layer 3 switch will perform substantially better as it will do hardware (ASIC) vs. software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh)) Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block(s) of public IPs, vlan them to individual interfaces for the vendor connections and the servers (which will not reachable from the wan side of course) and setup routing on the switch. So here are my questionss: Is there a substantial performance difference between 1 and 2, i.e. hardware based switching on a layer 3 vs a software base on the 2911? I have trolled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle. The vendors we connect to are secure and trusted (famous last words) and as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what what kind of latency are we really talking about if I push the data through a router (or even ASA for that matter)? For our purposes, 5 ms will not kill us, 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league. Is there any issues with using public IPs on a layer 3 switch? I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books so I welcome any advice / insight. Thanks in advance.
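     To make option 2 a little more concrete, a hedged sketch of inter-VLAN routing on the 3560 (VLAN numbers and addresses are invented); routed SVIs like these are what keep the vendor traffic switched in hardware on the 3560 instead of passing through the ASA:

        ip routing
        !
        vlan 10
         name vendor-feed
        vlan 20
         name app-servers
        !
        interface Vlan10
         ip address 203.0.113.1 255.255.255.248
        interface Vlan20
         ip address 203.0.113.9 255.255.255.248
        !
        interface GigabitEthernet0/1
         description cross-connect to vendor A
         switchport mode access
         switchport access vlan 10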

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (This is in the trunk not in Release V1.0). The main concept behind it is to create a View Model class which would have only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/. Setting up a view Lets say you have an entity that stores lots of data about a game result for example: GameScore entity public class GameScore : IRapidEntity {     public Guid Id { get; set; }     public string GamerId {get;set;}     public string Name { get; set; }     public Double Score { get; set; }     public Byte[] ThumbnailAvatar { get; set; }     public DateTime DateAdded { get; set; } }   On your page you want to display a list of scores but you only want to display the score and the date added, you create a View Model for displaying just those properties. GameScoreView public class GameScoreView : IRapidView {     public Guid Id { get; set; }     public Double Score { get; set; }     public DateTime DateAdded { get; set; } }   Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax. View Setup public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score }); } As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view, this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically, a view model id will always be the same as it’s corresponding entity.   Adding Filters One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, this will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, lets say that you will only ever show the scores for the last 10 days, you could add a filter like the following: Add single filter public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10)); } If you wanted to further limit the data, you could also say only scores above 100: Add multiple filters public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))         .AddFilter(x => x.Score > 100); }   Querying the view model So the important part is how to query the data. This is done using the repository, there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type) You can either use the result of the query method directly or perform further querying on the result is required. 
Querying the View public void DisplayScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> scores = repository.Query<GameScoreView>();       // display logic } Further Filtering public void TodaysScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();       // display logic }   Retrieving the actual entity Retrieving the actual entity can be done easily by using the GetById method on the repository. Say for example you allow the user to click on a specific score to get further information, you can use the Id populated in the returned View Model GameScoreView and use it directly on the repository to retrieve the full entity. Get Full Entity public void GetFullEntity(Guid gameScoreViewId) {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     GameScore fullEntity = repository.GetById(gameScoreViewId);       // display logic } Synchronising The View If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one off during the application upgrade or if you are a little more cautious, you could run this at each application start up. Synchronise the view public void MyUpgradeTasks() {     RapidRepository<GameScore>.SynchroniseView<GameScoreView>(); } It’s worth noting that in normal operation, the view keeps itself in sync with the entities so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.   Summary I really hope you like this feature, it will be great for performance and I believe supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following : http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.
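
    Putting the pieces above together, the sketch below shows the whole view workflow in one place, using only the APIs demonstrated in this post (AddView, AddFilter, Query and GetById). It is illustrative rather than canonical: the GameScore/GameScoreView types and the 10-day / 100-point filters are the ones from the examples, and the Rapid Repository namespace from the CodePlex download is assumed.

    using System;
    using System.Collections.Generic;
    // plus the Rapid Repository namespace from the CodePlex download

    public class HighScoreScreen
    {
        public static void RegisterViews()
        {
            // One-off registration at application start up: map GameScore entities
            // to the lightweight GameScoreView and keep only recent, high scores.
            RapidRepository<GameScore>.AddView<GameScoreView>(
                    x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

        public void ShowScores()
        {
            var repository = new RapidRepository<GameScore>();

            // Query the pre-filtered view rather than the full entity list.
            List<GameScoreView> recentScores = repository.Query<GameScoreView>();

            // Drill into the full entity only when a specific score is selected.
            if (recentScores.Count > 0)
            {
                GameScore fullEntity = repository.GetById(recentScores[0].Id);
                // display logic
            }
        }
    }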

    Read the article

  • SharePoint - Summing Calculated Columns By Groups (DVWP)

    - by Mark Rackley
    I had a problem… okay.. okay.. so I have many problems… but let’s focus on one in particular or this blog post would never end… okay? Thank you…. So, I had an electronic timesheet where users entered hours for each day of the week. It also had a “Week Total” column which was a calculated column of the sum. The calculated column looked like this: Pretty easy.. nothing spectacular. So, what’s the problem? WELL……………….. There is a row in the timesheet for each task a person worked on in a given week. So, if you worked on 4 tasks, you would have 4 rows of data, and 4 week totals for that week: This is all fine and dandy, but I want to know what the total was for the entire week. Yes.. I realize the answer is 24 from my example… I mean, I know how to add! I just want SharePoint to display it for me for the executives (we all know, they have math problems).  You may be thinking, hey genius (in a sarcastic tone of course), why don’t you just go to the view and total on the “Week Total” field. What a brilliant idea! Why didn’t I think of that… let’s go to the view and do just that…. Ohhhhhh… you can’t total on a Calculated Column.. it’s not even an option…  Yeah… I had the same moment. So, what do you do? Well… what do you think I did? 1) Googled “SharePoint total calculated column” 2) Said it couldn’t be done 3) Took a nap 4) Asked the question on twitter? The correct answer of course is number 4… followed by number 3… although I may have told my boss number 2 so that I look more brilliant than I am? It’s safe to say I did NOT try to find the solution on my own doing step 1… that would be just WAY to easy… So, anyway, I posted the question on Twitter and it turns out several people had suggestions from using jQuery to using DVWPs. I tend to be a big fan of the DVWP except for the disgusting process of deploying them to another farm.. ugh… just shoot me…. so, that is the solution I went with. Laura Rogers (@WonderLaura) has a super duper easy to follow video on the subject over at EndUserSharePoint.com: SharePoint: Displaying Calculated Column SUMS in a View (Screencast) Laura’s video was very easy to follow and was ALMOST exactly what I needed. She does a great job walking you through every step of summing up a calculated field which was PART of my problem. The other part was my list is grouped by date! So, I wanted to see for a given week, the summed “Week Total” of hours. Laura got me on the right track with her video and I dug a little deeper into the DVWP to accomplish my task. So, here are the steps you follow: 1. Click on the "chevron” (I didn’t know it was actually called that until I heard Laura say it).. I always call it the “little-button-in-the-top-right-corner-with-the-greater-than-sign”.. but “chevron” is much shorter. So, click on the chevron, click on “Sort and Group”. The Add the field you want to group by, in my example it is the “Monday Date” of the timesheet entry. Make sure to check the check boxes for “Show Group Header” AND “Show Group Footer”. Click “OK”. The view now shows the count of each grouped set of data: Interesting, this looks very similar to Laura’s video… right? So, let’s take a look at the code for the Count: Count : <xsl:value-of select="count($nodeset)" /> Wow, also very similar… except in Laura’s video it looks like: Count : <xsl:value-of select="count($Rows)" /> So.. the only difference is that instead of $Rows we have $nodeset. It turns out the $nodeset will go through each Row in the group just like $Rows goes through each row in the entire view. 
So, using the exact same logic as in Laura’s blog except replacing $Rows with $nodeset we get the functionality of being able to sum up the values for a group. So, I want to replace “Count: #” with the total hours, this is done using the following changes to the above code: Week Total : <xsl:value-of select="sum($nodeset/@Monday)+sum($nodeset/@Tuesday) +sum($nodeset/@Wednesday)+sum($nodeset/@Thursday)+sum($nodeset/@Friday) +sum($nodeset/@Saturday)+sum($nodeset/@Sunday)" /> Our final output has the summed hours for each group! So… long story short… follow Laura’s blog, then group your list, then replace “$Rows” with “$nodeset”. One caveat, this will not work if you group by a person field. For some reason the person field does not go through each row in the group. I haven’t dug into this much yet. Maybe if I find some time… whatever that is… Anyway, Laura did all the work, I just took it one small step forward… as always, feel free to leave any additional insights you may have. We’re all learning here!

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip
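
    The DoDisplay method that Initialize spins up on the background thread is not shown above. Purely as an illustration, a refresh loop along the following lines would match the row-scanning snippet earlier in the post; the MatrixBitmap property and the OnRow(row, bitmap, onOff) call are taken from that snippet, while the surrounding loop and the method body itself are assumptions, not the author's actual code.

    // Hypothetical sketch of the background refresh loop inside the LedMatrix class.
    private static void DoDisplay(LedMatrix matrix)
    {
        // Re-scan the rows as fast as possible; persistence of vision turns the
        // rapidly flashing rows into what looks like a single stable image.
        // The thread is killed by Dispose() via DisplayThread.Abort().
        while (true)
        {
            var row = 0;
            foreach (var bitmap in matrix.MatrixBitmap)
            {
                matrix.OnRow(row, bitmap, true);   // light the current row
                matrix.OnRow(row, bitmap, false);  // switch it off before the next row
                row++;
            }
        }
    }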

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form that has a filter drop down that allows selection of categories. This list is static and rarely changes so rather than loading these items from the database each time I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario here's the drop down list definition in the Razor View:@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories") where Model.CategoryList gets set with:[HttpPost] [CompressContent] public ActionResult List(MessageListViewModel model) { InitializeViewModel(model); busEntry entryBus = new busEntry(); var entries = entryBus.GetEntryList(model.QueryParameters); model.Entries = entries; model.DisplayMode = ApplicationDisplayModes.Standard; model.CategoryList = AppUtils.GetCachedCategoryList(); return View(model); } The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) return _CategoryList; lock (_SyncLock) { if (_CategoryList != null) return _CategoryList; var catBus = new busCategory(); var categories = catBus.GetCategories().ToList(); // Turn list into a SelectItem list var catList= categories .Select(cat => new SelectListItem() { Text = cat.Name, Value = cat.Id.ToString() }) .ToList(); catList.Insert(0, new SelectListItem() { Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(), Text = "All Categories except Real Estate" }); catList.Insert(1, new SelectListItem() { Value = "-1", Text = "--------------------------------" }); _CategoryList = catList.ToArray(); } return _CategoryList; } private static SelectListItem[] _CategoryList ; This seemed normal enough to me - I've been doing stuff like this forever caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc.. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to and changing the actual entry item in the list and setting its selected value. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly. 
But it's also a problem because the array is getting updated by multiple ASP.NET threads, which would likely lead to odd crashes from time to time. Not good! In retrospect the model binding behavior makes perfect sense. The actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. Totally missed this during testing, and it is another one of those little "Did you know?" moments. So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) { // Have to create new instances via projection // to avoid ModelBinding updates to affect this // globally return _CategoryList .Select(cat => new SelectListItem() { Value = cat.Value, Text = cat.Text }) .ToArray(); } …}  The key is that newly created instances of SelectListItem are returned, not just filtered instances of the original list. The key here is 'new instances' so that the ModelBinding updates do not update the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would not have cached the list and just taken the hit to read from the database. If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in MVC, ASP.NET, .NET
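
    One alternative worth sketching is to cache only the raw category values and project them into fresh SelectListItem instances on every call, so there is never a shared instance for the ModelBinder to mutate. This is not code from the post: busCategory is the business object it uses, but the cached pair structure and the names below are my own.

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    public static class CategoryListCache
    {
        private static readonly object _SyncLock = new object();
        private static KeyValuePair<string, string>[] _CategoryData;

        public static SelectListItem[] GetCategoryList()
        {
            if (_CategoryData == null)
            {
                lock (_SyncLock)
                {
                    if (_CategoryData == null)
                    {
                        // Cache plain values only - never the SelectListItem objects.
                        var catBus = new busCategory();
                        _CategoryData = catBus.GetCategories()
                            .Select(cat => new KeyValuePair<string, string>(cat.Id.ToString(), cat.Name))
                            .ToArray();
                    }
                }
            }

            // Fresh SelectListItem instances on every request, so the ModelBinder
            // can set Selected without touching shared state.
            return _CategoryData
                .Select(c => new SelectListItem { Value = c.Key, Text = c.Value })
                .ToArray();
        }
    }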

    Read the article

  • 2 Days of Share &amp; Point

    - by Mark Rackley
    Groovy man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far out time and learn more about SharePoint in one day than you could in a year from the man… Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, Southern Missouri, and the very north east tip of Oklahoma. Last year we had a great turn out with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington. Hey Man… what’s SharePoint Saturday anyway? Sounds like a conspiracy man… Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt what-so-ever to bring you down man. SharePoint Saturday is grass roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, food is eaten, assorted beverages are consumed until wee hours of the morning. SharePoint Saturday started with just a few sporadic one day events here and there. However, over the past year SharePoint Saturday has exploded and it’s hard to find a weekend where there is NOT a SharePoint Saturday event happing in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than those speakers you find at the major multi-day conferences. These guys aren’t even paid to speak.. they do it out of love man… SharePoint Saturday Ozarks 2009 Alumni We had a star studded cast last year with many returning this year! Just check out the fun that they had… John Ferringer – Admin rockstar… I can still sense the awesomeness   SharePoint poster children Mike Watson & Laura Rogers     Lori Gowin spreading the SharePoint Love Eric Shupps is a little bit country and a little bit rock and roll       Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs Actually, you can see real photos from last year’s SharePoint Saturday ozarks here:  picasaweb.google.com/mrackley/SharePointSaturdayOzarks#    What’s new for SharePoint Saturday Ozarks 2010 SharePoint Saturday Ozarks 2010 will totally blow your mind man. We’re getting the band back to together with many returning speakers and few new faces. Joel Oleson will be speaking this year, maybe he’ll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks, however he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more! Wait Man… You said 2 days? I thought it was a one day event? Correct you are my herbal smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love, on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed please).   Here are the details: WHAT 4 – 5 hour float down the Buffalo National River WHEN & WHERE Sunday June 13th. 
We will be leaving at 10am from the Parking Lot of: Gordon’s Motel & Canoe Rental Old Highway 7 Jasper, AR 72641 (870) 446-5252 Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon’s Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch. WHAT ELSE? The float trip is dependent on the weather of course, we won’t be floating down the river in a thunderstorm, however I planned SPS Ozarks around a time of year ideal for floating. We aren’t talking class 5 rapids here, you don’t need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery). HOW DO I SIGN UP? When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited though! If you do not intend to go, please do not take someone else’s place.  The cost for the float trip will be about $35 dollars per person (which you are responsible for unless we find a sponsor). The price includes shuttle to/from river, canoe, life jackets, paddles, and boxed lunch. Far out man… how do I register??? You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/ We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville: Annex Suites are available for $103.20 This is So Groovy.. How can I help? I’m glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested please let me know!  You can find out more information at http://www.sharepointsaturday.org/ozarks Hey… wait a minute…. what exactly IS SharePoint man??? Come to SharePoint Saturday Ozarks and find out!!  See you guys there!

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final  thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building an ASP.NET MVC application that uses external services to authenticate users. As far as ASP.NET is concerned, users are fully authenticated when they are redirected back from the external service. In the system they are logically authenticated when they have created user profiles. In this posting I will show you how to force ASP.NET MVC controller actions to demand the existence of custom user profiles. Using external authentication sources with AppFabric Suppose you want to be user-friendly and don’t force users to remember yet another username/password when they visit your site. You can accept logins from popular sites like Windows Live, Facebook, Yahoo, Google and many more. If a user has an account with one of these services then he or she can use that account to log in to your site. If you have a community site then you usually have support for user profiles too. Some of these providers give you some information about users and others don’t. So the only thing in common you get from all those providers is some unique ID that identifies the user in that service. The image above shows how a new user joins your site. Existing users who already have a profile are directed to the users homepage after they are authenticated. You can read more about how to solve the semi-authorized users problem in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames, which we don’t get from all identity providers. Why is IIdentity.Name sometimes empty? The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. In short, the problem is that not all providers supply a claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The following diagram illustrates what happens when the user gets a token from AppFabric ACS and is redirected to your site. Now, when the user was authenticated using Windows Live ID we don’t have a name claim in the token and that’s why User.Identity.Name is empty. Okay, we can force the nameidentifier claim to be used as the name (we can do it in the web.config file) but we have user profiles and we want the username from the profile to be shown whenever the username is asked for. Modifying the name claim Now let’s force IClaimsIdentity to use the username from our user profiles. You can read more about my profiles topic in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages and you can find some useful extension methods for claims identity in my blog posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name: we check if the user has a profile, if the user has a profile we check if User.Identity.Name matches the name given by the profile, if the names do not match then the identity provider probably returned some name for the user, so we remove the name claim and recreate it with the correct username, and we add the new name claim to the claims collection. All this happens in the Application_AuthorizeRequest event of our web application. The code is here.
protected void Application_AuthorizeRequest() {     if (string.IsNullOrEmpty(User.Identity.Name))     {         var identity = User.Identity;         var profile = identity.GetProfile();         if (profile != null)         {             if (profile.UserName != identity.Name)             {                 identity.RemoveName();                   var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);                 var claimsIdentity = (IClaimsIdentity)identity;                 claimsIdentity.Claims.Add(claim);             }         }     } } RemoveName extension method is simple – it looks for name claims of IClaimsIdentity claims collection and removes them. public static void RemoveName(this IIdentity identity) {     if (identity == null)         return;       var claimsIndentity = identity as ClaimsIdentity;     if (claimsIndentity == null)         return;       for (var i = claimsIndentity.Claims.Count - 1; i >= 0; i--)     {         var claim = claimsIndentity.Claims[i];         if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")             claimsIndentity.Claims.RemoveAt(i);     } } And we are done. Now User.Identity.Name returns the username from user profile and you can use it to show username of current user everywhere in your site. Conclusion Mixing AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible but a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for this time and hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.
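
    The GetProfile() extension method used above comes from the earlier ProfileRequiredAttribute posting and is not reproduced in this excerpt. Purely for illustration, a minimal version might resolve the profile from the nameidentifier claim along these lines; the UserProfile type and ProfileRepository are hypothetical stand-ins, not part of WIF or the original code.

    using System.Linq;
    using System.Security.Principal;
    using Microsoft.IdentityModel.Claims;

    public static class IdentityExtensions
    {
        // Hypothetical sketch - the real GetProfile() lives in the author's earlier post.
        public static UserProfile GetProfile(this IIdentity identity)
        {
            var claimsIdentity = identity as IClaimsIdentity;
            if (claimsIdentity == null)
                return null;

            // The nameidentifier claim is the unique id issued by the identity provider.
            var nameIdClaim = claimsIdentity.Claims
                .FirstOrDefault(c => c.ClaimType == ClaimTypes.NameIdentifier);
            if (nameIdClaim == null)
                return null;

            // Look the profile up in whatever store the application uses.
            return ProfileRepository.GetByProviderId(nameIdClaim.Value);
        }
    }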

    Read the article

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx I started reading this ebook on September 28, 2013, the same day it was sent my way by Manning Publications Co. for review while it still being fresh off the press. So 1st thing – thanks to Manning for this opportunity and a free copy of this must have on every C# developer’s desk book! Several hours ago I finished reading this book (well, except a for a large portion of its quite lengthy appendix). I jumped writing this review right away while still being full of emotions and impressions from reading it thoroughly and running code examples. Before I go any further I would like say that I used to program on various platforms using various languages starting with the Mainframe and ending on Windows, and I gradually shifted toward dealing with databases more than anything, however it happened with me to program in C# 1 a lot when it was first released and then some C# 2 with a big leap in between to C# 5. So my perception and experience reading this book may differ from yours. Also what I want to tell is somewhat funny that back then, knowing some Java and seeing C# 1 released, initially made me drawing a parallel that it is a copycat language, how wrong was I… Interestingly, Jon programs in Java full time, but how little it was mentioned in the book! So more on the book: Be informed, this is not a typical “Recipes”, “Cookbook” or any set of ready solutions, it is rather targeting mature, advanced developers who do not only know how to use a number of features, but are willing to understand how the language is operating “under the hood”. I must state immediately, at the same time I am glad the author did not go into the murky depths of the MSIL, so this is a very welcome decision on covering a modern language as C# for me, thank you Jon! Frankly, not all was that rosy regarding the tone and structure of the book, especially the the first half or so filled me with several negative and positive emotions overpowering each other. To expand more on that, some statements in the book appeared to be bias to me, or filled with pre-justice, it started to look like it had some PR-sole in it, but thankfully this was all gone toward the end of the 1st third of the book. Specifically, the mention on the C# language popularity, Java is the #1 language as per https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top which I highly doubt), also many interesting functional languages as Clojure and Groovy appeared and gained huge traction which run on top of Java/JVM whereas C# does not enjoy such a situation. If we want to discuss the popularity in general and say how fast a developer can find a new job that pays well it would be indeed the very Java, C++ or PHP, never C#. Or that phrase on language preference as a personal issue? We choose where to work or we are chosen because of a technology used at a given software shop, not vice versa. The book though it technically very accurate with valid code, concise examples, but I wish the author would give more concrete, real-life examples on where each feature should be used, not how. 
Another point to realize before you get the book is that it is almost a live book which started to be written when even C# 3 wasn’t around so a lot of ground is covered (nearly half of the book) on the pre-C# 3 feature releases so if you already have a solid background in the previous releases and do not plan to upgrade, perhaps half of the book can be skipped, otherwise this book is surely highly recommended. Alas, for me it was a hard read, most of it. It was not boring (well, only may be two times), it was just hard to grasp some concepts, but do not get me wrong, it did made me pause, on several occasions, and made me read and re-read a page or two. At times I even wondered if I have any IQ at all (LOL). Be prepared to read A LOT on generics, not that they are widely used in the field (I happen to work as a consultant and went thru a lot of code at many places) I can tell my impression is the developers today in best case program using examples found at OpenStack.com. Also unlike the Java world where having the most recent version is nearly mandated by the OSS most companies on the Microsoft platform almost never tempted to upgrade the .Net version very soon and very often. As a side note, I was glad to see code recently that included a nullable variable (myvariable? notation) and this made me smile, besides, I recommended that person this book to expand her knowledge. The good things about this book is that Jon maintains an active forum, prepared code snippets and even a small program (Snippy) that is happy to run the sample code saving you from writing any plumbing code. A tad now on the C# language itself – it sure enjoyed a wonderful road toward perfection and a very high adoption, especially for ASP development. But to me all the recent features that made this statically typed language more dynamic look strange. Don’t we have F#? Which supposed to be the dynamic language? Why do we need to have a hybrid language? Now the developers live their lives in dualism of the static and dynamic variables! And LINQ to SQL, it is covered in depth, but wasn’t it supposed to be dropped? Also it seems that very little is being added, and at a slower pace, e.g. Roslyn will come in late 2014 perhaps, and will be probably the only main feature. Again, it is quite hard to read this book as various chapters, C# versions mentioned every so often only if I only could remember what was covered exactly where! So the fact it has so many jumps/links back and forth I recommend the ebook format to make the navigations easier to perform and I do recommend using software that allows bookmarking, also make sure you have access to plenty of coffee and pizza (hey, you probably know this joke – who a programmer is) ! In terms of closing, if you stuck at C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning, this book unlike from any other publisher so far, was the only one as well readable (put it formatted) on my tablet as in Adobe Reader on a laptop.

    Read the article

  • New <%: %> Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the nineteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers a small, but very useful, new syntax feature being introduced with ASP.NET 4 – which is the ability to automatically HTML encode output within code nuggets.  This helps protect your applications and sites against cross-site script injection (XSS) and HTML injection attacks, and enables you to do so using a nice concise syntax. HTML Encoding Cross-site script injection (XSS) and HTML encoding attacks are two of the most common security issues that plague web-sites and applications.  They occur when hackers find a way to inject client-side script or HTML markup into web-pages that are then viewed by other visitors to a site.  This can be used to both vandalize a site, as well as enable hackers to run client-script code that steals cookie data and/or exploits a user’s identity on a site to do bad things. One way to help mitigate against cross-site scripting attacks is to make sure that rendered output is HTML encoded within a page.  This helps ensures that any content that might have been input/modified by an end-user cannot be output back onto a page containing tags like <script> or <img> elements.  ASP.NET applications (especially those using ASP.NET MVC) often rely on using <%= %> code-nugget expressions to render output.  Developers today often use the Server.HtmlEncode() or HttpUtility.Encode() helper methods within these expressions to HTML encode the output before it is rendered.  This can be done using code like below: While this works fine, there are two downsides of it: It is a little verbose Developers often forget to call the HtmlEncode method New <%: %> Code Nugget Syntax With ASP.NET 4 we are introducing a new code expression syntax (<%:  %>) that renders output like <%= %> blocks do – but which also automatically HTML encodes it before doing so.  This eliminates the need to explicitly HTML encode content like we did in the example above.  Instead you can just write the more concise code below to accomplish the same thing: We chose the <%: %> syntax so that it would be easy to quickly replace existing instances of <%= %> code blocks.  It also enables you to easily search your code-base for <%= %> elements to find and verify any cases where you are not using HTML encoding within your application to ensure that you have the correct behavior. Avoiding Double Encoding While HTML encoding content is often a good best practice, there are times when the content you are outputting is meant to be HTML or is already encoded – in which case you don’t want to HTML encode it again.  ASP.NET 4 introduces a new IHtmlString interface (along with a concrete implementation: HtmlString) that you can implement on types to indicate that its value is already properly encoded (or otherwise examined) for displaying as HTML, and that therefore the value should not be HTML-encoded again.  The <%: %> code-nugget syntax checks for the presence of the IHtmlString interface and will not HTML encode the output of the code expression if its value implements this interface.  This allows developers to avoid having to decide on a per-case basis whether to use <%= %> or <%: %> code-nuggets.  Instead you can always use <%: %> code nuggets, and then have any properties or data-types that are already HTML encoded implement the IHtmlString interface. 
Using ASP.NET MVC HTML Helper Methods with <%: %> For a practical example of where this HTML encoding escape mechanism is useful, consider scenarios where you use HTML helper methods with ASP.NET MVC.  These helper methods typically return HTML.  For example: the Html.TextBox() helper method returns markup like <input type=”text”/>.  With ASP.NET MVC 2 these helper methods now by default return HtmlString types – which indicates that the returned string content is safe for rendering and should not be encoded by <%: %> nuggets.  This allows you to use these methods within both <%= %> code nugget blocks: As well as within <%: %> code nugget blocks: In both cases above the HTML content returned from the helper method will be rendered to the client as HTML – and the <%: %> code nugget will avoid double-encoding it. This enables you to default to always using <%: %> code nuggets instead of <%= %> code blocks within your applications.  If you want to be really hardcore you can even create a build rule that searches your application looking for <%= %> usages and flags any cases it finds as an error to enforce that HTML encoding always takes place. Scaffolding ASP.NET MVC 2 Views When you use VS 2010 (or the free Visual Web Developer 2010 Express) you’ll find that the views that are scaffolded using the “Add View” dialog now by default always use <%: %> blocks when outputting any content.  For example, below I’ve scaffolded a simple “Edit” view for an article object.  Note the three usages of <%: %> code nuggets for the label, textbox, and validation message (all output with HTML helper methods): Summary The new <%: %> syntax provides a concise way to automatically HTML encode content and then render it as output.  It allows you to make your code a little less verbose, and to easily check/verify that you are always HTML encoding content throughout your site.  This can help protect your applications against cross-site script injection (XSS) and HTML injection attacks.  Hope this helps, Scott
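
    The screenshots from the original post are not reproduced in this excerpt, so the markup below is a reconstruction of the before/after syntax being described; the Message property on the model is hypothetical.

    <%-- Explicit HTML encoding inside a classic <%= %> code nugget --%>
    <div><%= Server.HtmlEncode(Model.Message) %></div>

    <%-- ASP.NET 4: the <%: %> code nugget HTML encodes automatically --%>
    <div><%: Model.Message %></div>

    <%-- MVC 2 helpers return HtmlString, so their markup is not double-encoded --%>
    <%: Html.TextBoxFor(m => m.Message) %>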

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() api and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the msdn documentation. So lets take a image like, and AlphaBlend() it on our window. The code to do so is below, (under the WM_PAINT message of WndProc) HBITMAP hBitmap=NULL, hBitmapOld=NULL; HDC hMemDC=NULL; BLENDFUNCTION bf; hdc = BeginPaint(hWnd, &ps); hMemDC = CreateCompatibleDC(hdc); hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1)); hBitmapOld = SelectObject(hMemDC, hBitmap); bf.BlendOp = AC_SRC_OVER; bf.BlendFlags = 0; bf.SourceConstantAlpha = 80; //transparency value between 0-255 bf.AlphaFormat = 0;    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf); SelectObject(hMemDC, hBitmapOld); DeleteDC(hMemDC); DeleteObject(hBitmap); EndPaint(hWnd, &ps);   The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it on the device DC (hdc), with a transparency value of 80. The result is: Pretty simple till now. Now lets try to do something a little more exciting. Lets get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this is the first time I played around with GDI apis, I ran into something that everybody runs into sometime or the other, flickering. When clicking the buttons the images would flicker a lot, I figured out why and used something called double buffering to avoid flickering. We will look at both my first implementation and the second implementation just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code: - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(), these variables are global and are initialized under WM_CREATE - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent) - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20. 
The default value of 'step' is 0 Now lets take a look at my first implementation: //this funciton causes flicker, cos it draws directly to screen several times void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL;     BLENDFUNCTION bf;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     //create a memory DC     hMemDC = CreateCompatibleDC(hdc);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents of screen     if (step < 0)     {         RECT rect = {0, 0, 240, 200};         FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);     DeleteDC(hMemDC);     ReleaseDC(hWnd, hdc); }   In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto memory DC and AlphaBlend() it over window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly: Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally when it is ready we Blt hTempDC on hdc, and the images are displayed in one go. 
Here is the code for our new DrawPics() function: //no flicker void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;     BLENDFUNCTION bf;     HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     hMemDC = CreateCompatibleDC(hdc);     hTempDC = CreateCompatibleDC(hdc);     hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);     hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents     if (step < 0)     {         RECT rect = {0, 0, 240, 150};         FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     //Blt hBitmap2 directly to hTempDC     BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);     //now hTempDC is ready, blt it directly on hdc     BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);     SelectObject(hTempDC, hBitmapOld);     DeleteObject(hBitmapTemp);     DeleteDC(hMemDC);     DeleteDC(hTempDC);     ReleaseDC(hWnd, hdc); }   This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to set a bitmap on it. We use CreateCompatibleBitmap() to create a bitmap of required dimension (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas, drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows:   If you want the entire solutions source code then leave a message, I will share the code over SkyDrive.

    Read the article

< Previous Page | 169 170 171 172 173 174 175 176 177 178 179 180  | Next Page >