Search Results

Search found 1076 results on 44 pages for 'simplify'.


  • Is it dangerous for me to give some of my Model classes Control-like methods?

    - by Pureferret
    In my personal project I have tried to stick to MVC, but I've also been made aware that sticking to MVC too tightly can be a bad thing as it makes writing awkward and forces the flow of the program in odd ways (i.e. some simple functions can be performed by something that normally wouldn't, avoiding MVC-related overheads). So I'm beginning to feel justified in this compromise: I have some 'manager programs' that 'own' data and have some way to manipulate it, as such I think they'd count as both part of the model and part of the control, and to me this feels more natural than keeping them separate. For instance: One of my Managers is the PlayerCharacterManager that has these methods: void buySkill(PlayerCharacter playerCharacter, Skill skill); void changeName(); void changeRole(); void restatCharacter(); void addCharacterToGame(); void createNewCharacter(); PlayerCharacter getPlayerCharacter(); List<PlayerCharacter> getPlayersCharacter(Player player); List<PlayerCharacter> getAllCharacters(); I hope the method names are transparent enough that they don't all need explaining. I've called it a manager because it will help manage all of the PlayerCharacter 'model' objects the code creates, and create and keep a map of these. I may also get it to store other information in the future. I plan to have another two similar classes for this sort of control, but I will orchestrate when and how this happens, and what to do with the returned data, via a pure controller class. Splitting up control between informed managers and the controller, as opposed to operating just through a controller, seems like it will simplify my code and make it flow better. My question is: is this a dangerous choice, in terms of making the code harder to follow/test/fix? Is this something established as good, bad or neutral? I couldn't find anything similar except the idea of Actors, but that's not quite what I'm trying to do. Edit: Perhaps an example is needed; I'm using the Controller to update the view and access the data, so when I click the 'Add new character to a player' button it'll call methods in the controller that then go and tell the PlayerCharacterManager class to create a new character instance, it'll call the PlayerManager class to add that new character to the player-character map, and then it'll add this information to the database, and tell the view to update any GUIs affected. That is the sort of 'control sequence' I'm hoping to create with these manager classes.

    Read the article

  • Help understanding my hard drive / partitioning situation... Pictures Included! :)

    - by xopenex
    So I have installed Windows 7 and two different distros of Linux... I have read and tried to understand things like "spanned" "extended" "primary" "swap" "dev/dev2/" "GRUB" "Windows Boot Loader/Manager" etc.... I have a very, very limited understanding of all of it! :) I am trying to figure out how to get all OS boot options on one boot manager (I'm thinking it will be GRUB), because at this point when I turn on my computer, I basically get two booting options (excluding the memtest options etc)... One option is to boot one of my Linux distros and the second option is to boot my Windows 7. When I go with the first option, Linux boots up... when I go with the second Windows 7 option, I get the "Windows Boot Manager" screen and I can choose Windows 7 or my other installation of Linux (Ubuntu)... In addition, I did not have a swap partition from my first installation of Linux; I created it during the installation of my second distro... This is a lot of info for me, but I'm guessing that you Linux gurus pretty much understand what is going on! Hope my question makes sense.. I will try and simplify... Can I get all 3 OSes set to boot from one GRUB? Can I get both Linux distros to use one swap file? (I have seen this is possible in other threads, but because of how my disk is partitioned, I don't know if I can do this.) I hope that I don't have to start all over installing one after the other. I've got some pics that may help explain my hard drive situation! Thanks guys! :) EDIT... I had some pics, but I'm a new member, so I can't post them... :( Here is a description of the pics, in case I can email them or post them later. [grub][3] First screen I come to after turning on the computer... "Ubuntu with linux 3.2.6" (highlighted) fires up Linux perfectly... the other choice at the bottom of the list, "Windows 7 (loader) (on dev/sda1)", brings me to the next picture below.. windows boot manager [win boot mngr][6] both options here load the OS selected [Disk Manager Windows][1] picture of my hard drive situation through the Windows Disk Manager utility [gparted][2] picture of my hard drive situation through "gparted" [mycomp][4] picture of my hard drive situation through "my computer" [paragon][5] one last pic of my hard drive situation through the eyes of "paragon"
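
    One common way to get everything onto a single GRUB menu is sketched below, assuming you run it from the Linux install whose GRUB currently owns the MBR and that it is a Debian/Ubuntu-style distro (other distros use grub-mkconfig or grub2-mkconfig instead of update-grub); the UUID shown is a placeholder. Sharing the one swap partition between both distros is then just a matter of referencing the same UUID in each distro's /etc/fstab.

      # run from the Linux distro whose GRUB boots first
      sudo os-prober            # detect Windows 7 and the other Linux installation
      sudo update-grub          # regenerate grub.cfg with entries for all three systems
      # share one swap partition: find its UUID, then add it to each distro's /etc/fstab
      sudo blkid | grep -i swap                     # e.g. /dev/sda5: UUID="1234-abcd" TYPE="swap"
      echo 'UUID=1234-abcd none swap sw 0 0' | sudo tee -a /etc/fstab    # placeholder UUID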

    Read the article

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to setup an nginx reverse proxy to make some upcoming server moves and load balanced implementations much easier within our apps. Since our servers are all IIS case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case. Infrastructure notes: All backend servers are IIS Most services are WCF services I am trying to simplify the URLs so I can move services around as we continue to build out I can't set my location to case insensitive due to the following error: nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101 The main part of my conf file where I am trying to handle the rewrite is as follows location /svc_test { proxy_set_header x-real-ip $remote_addr; proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for; proxy_set_header host $http_host; proxy_pass http://backend/serviceSite/WFCService.svc; } location ~* /test { rewrite ^/(.*)/$ /svc_test/$1 last; } It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science so can someone point me in the right direction so I can look like an idiot and just move on?
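
    For what it's worth, a possible fix sketched under two assumptions: the goal is just to map /test in any case onto the existing /svc_test location, and the posted rewrite fails because its pattern only matches URIs ending in a slash and rewrite patterns stay case-sensitive even inside a ~* location. A named capture in the case-insensitive location regex sidesteps both problems (untested sketch):

      location ~* ^/test(?<rest>/.*)?$ {
          # $rest carries whatever followed /test (or is empty); "last" re-runs location
          # matching so the /svc_test prefix location, which owns proxy_pass, handles it
          rewrite ^ /svc_test$rest last;
      }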

    Read the article

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using some product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is made from an account with a name which also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's one. After actively collaborating with the product's developer in attempts to localize the issue, he came back with a "can't be fixed" resolution - or, if you prefer to call a pikestaff a pikestaff, then it's more likely a "don't know how to fix" resolution :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO, but since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant; however, my experience tells me that there will always be somebody who thinks that, on the contrary, it is very relevant. Here it is: SpRecord.

    Read the article

  • Setup site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home and I have real trouble finding out which configuration settings go where. I downloaded and installed XAMPP which seemed to get everything nicely set up and can see a working website on http://localhost. So far so good. The point of this is to develop a website of course, and to make my life easier (irony?), I wanted to let the web site root point to my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the folder specified or am I going to have to clear another hurdle? Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf and made the following changes to httpd-vhosts.conf: Uncommented the line NameVirtualHost *:80 Added a virtual host shown below. Restarted Apache and saw the expected page on http://localhost <VirtualHost *:80> DocumentRoot "C:/xampp/htdocs/" ServerName localhost ErrorLog "logs/dummy-host2.localhost-error.log" CustomLog "logs/dummy-host2.localhost-access.log" combined </VirtualHost> I then created a new folder named C:\testweb, added an index.html file and changed the DocumentRoot line shown above. For all intents and purposes I would then expect the two configurations to be equivalent. But this setup gives me an error 403. Even though the C:\testweb folder already had the same permissions as the C:\xampp\htdocs folder, I then went further and gave the Everyone group full control of C:\testweb and got exactly the same problem. So what did I miss?
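
    In case it helps, the 403 here is usually Apache rather than NTFS: the default httpd.conf denies access to everything outside htdocs, so a DocumentRoot in a new location needs its own <Directory> block granting access. A hedged sketch for the vhost above, in Apache 2.2 syntax (which XAMPP shipped at the time; Apache 2.4 uses "Require all granted" instead of the Order/Allow lines):

      <VirtualHost *:80>
          DocumentRoot "C:/testweb"
          ServerName localhost
          <Directory "C:/testweb">
              Options Indexes FollowSymLinks
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>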

    Read the article

  • New keyboard for linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. In the following, keep in mind I live in Italy. I considered the following models: Adesso PCK-308UB - Adesso Tru-Form™ Pro - Contoured Ergonomic Keyboard with TouchPad-PS2. Pros: has a built-in touchpad in the same position as on my laptop; somewhat cheaper than the alternative below. Cons: the surface doesn't seem to be bowl-shaped; keys seem to lie on a straight, slightly inclined surface (it seems to be an idea used extensively in other ergonomic keyboards). According to a few comments on the net, new Adesso keyboards seem to lack robustness; they're likely to lose small parts after a few weeks or months. Other users, instead, seem to never have had any problem in years and swear by their quality and comfort. Those who had problems, however, lamented a lack of responsiveness from the manufacturer. I'm not sure whether the keyboard (at least the standard keys) and the touchpad will both be recognized correctly under Linux distros (I mostly use FC btw). Last time I checked, Adesso didn't have local resellers in my country. Microsoft Natural Ergonomic Keyboard - Pros: recognized as one of the most comfortable keyboards; reliable customer service operating in my country; AFAIK there are several documented ways to get the extra buttons to work with Linux. Cons: it doesn't have a built-in touchpad, and it has a numeric keypad that wastes space and lengthens the reach to the mouse. But there could be other keyboards I haven't considered yet, so here follows my ideal keyboard wishlist, ordered by priority: Linux compatible; basic ergonomic design, which entails a split, tilted keyboard and pads; advanced ergonomic design, like Truly Ergonomic's or Kinesis, where special keys (like enter, caps-lock...) are placed symmetrically in the middle to be used by the thumbs; a built-in touchpad/trackball placed under the keyboard - I just love this on my notebook, I think it's pretty effective since it allows my hand to rest naturally every time I use it (any opinion on this?); high-quality switches, like Cherry's (unsure about this one); additional programmable keys placed near the usual ones, to simplify typing shortcuts. TIA Andrea

    Read the article

  • Creating a ssh tunnel to transfer files?

    - by Vincent
    For me, networks are a very "opaque" thing, and even after reading a lot of tutorials about SSH, I do not understand how to create a basic tunnel to transfer my files. The configuration is the following: My Computer --[Internet]--> Bridge Machine --[Local Network]--> Final Machine Currently I do the following: 1) Connect to the Bridge Machine with: ssh -X [email protected] 2) Connect to the Final Machine with: ssh -X username@finalmachine 3) I copy the address of the files I need (for example .../mydirectory) 4) Then I disconnect from the final machine with: exit 5) I copy the files to the bridge: scp -r username@finalmachine:/.../mydirectory . 6) I disconnect from the bridge with: exit 7) I copy the files to my machine: scp -r [email protected]:/.../mydirectory . Which is quite complicated. My question is basic: how do I simplify this using an SSH tunnel? (And please explain the meaning of each command line you write, so that I understand what each line really does and avoid using it like a magical thing. Furthermore, if port numbers are used, explain whether I can pick a completely random number or whether I have to choose a specific one.)
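
    A sketch of the usual answer, using the hostnames from the question and a placeholder path; 2222 is an arbitrary unprivileged local port (any free port above 1024 will do). The idea is to forward a local port through the bridge to the final machine's SSH port, so scp can then copy in a single step:

      # 1) Open the tunnel: connections to localhost:2222 are forwarded,
      #    through bridgemachine, to port 22 on finalmachine.
      #    -L = local port forward, -N = no remote command, -f = go to background.
      ssh -f -N -L 2222:finalmachine:22 [email protected]
      # 2) Copy straight to your machine through the tunnel.
      #    -P picks the forwarded port, -r copies the directory recursively.
      scp -P 2222 -r username@localhost:/path/to/mydirectory .
      # With a recent OpenSSH the two steps collapse into one using a jump host:
      # scp -r -o ProxyJump=[email protected] username@finalmachine:/path/to/mydirectory .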

    Read the article

  • Datacenter IP Addressing and DNS Management

    - by user65248
    Hello everyone. Basically we are setting up a small datacenter, about 300 amps of power and a maximum of 50 racks; I'm mentioning these so you can imagine the size and requirements. I have studied networking, mostly Microsoft and Windows-based systems, but I can't work out how IP addressing and DNS management and configuration work in a datacenter, and unfortunately I have to set up everything by myself, though we will definitely have some staff to do some of the work. Now my questions. Datacenter IP Addressing: Suppose we have got a block of 200 IP addresses from our ISP. How can I manage this block of IP addresses - is there any software out there to simplify this? I heard that using a DHCP server in a datacenter is not recommended; if that's wrong, what would you say about the MS DHCP server, of course considering we need to have backup servers in case of failure? How can I assign a block of IPs to a specific rack? I know it differs with different software and management tools, but I'm asking how it is normally done. IP addresses are exposed to the whole network - what if a customer tries to use an IP address that is not assigned to their server or rack? How can I prevent this, or how can I track IP usage? DNS Management: I'm going to set up at least two servers as our DNS servers. I know nothing about datacenter DNS systems, but I have configured DNS servers in normal networks and also for webservers. Now I want to know: What exactly needs to be done for DNS in a datacenter that is not done for normal networks? How can I configure PTR records? Why can't I configure PTR records on my webserver-side DNS server - why does it have to be done on the datacenter's DNS servers? I mean, what is the difference in DC DNS servers that allows them to do so? I know the question is very silly and simple, but I'm confused. Is there any software out there that handles the whole thing, I mean automatically adding records to the DNS and also managing IP addresses!? Thanks in advance
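
    To make the PTR question concrete: reverse lookups live in a dedicated in-addr.arpa zone, and only whoever the address block is delegated to (the ISP, or you if they delegate it to your DNS servers) can publish it - which is why it cannot simply be added to an ordinary web-hosting DNS server. A hedged BIND-style sketch using the documentation prefix 203.0.113.0/24 and made-up hostnames:

      ; zone "113.0.203.in-addr.arpa" - reverse zone for the block 203.0.113.0/24
      $TTL 86400
      @     IN  SOA  ns1.example-dc.net. hostmaster.example-dc.net. (
                      2024010101 3600 900 604800 86400 )
            IN  NS   ns1.example-dc.net.
      10    IN  PTR  customer-a.example-dc.net.   ; 203.0.113.10
      11    IN  PTR  customer-b.example-dc.net.   ; 203.0.113.11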

    Read the article

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials to each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there, navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this: In-house central FTP server: |- FolderA --> server A root folder |- FolderB --> server B root folder |- FolderC --> server C root folder A setup like this, would mean that we can manage access credentials on the central FTP server, and team members would only need to have the access credentials to the central FTP server, and from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forward requests to the remote FTP servers, but i feel like something like this (or something similar) would already have been done. So I'm looking for pointers that could help us find software for Windows that could help us to simplify our current setup. Thank you! Similar (unanswered) question here: FTP management server
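
    One way to approximate this without writing a custom FTP server, sketched with made-up hostnames, paths and credentials: mount each remote FTP server into the local filesystem with curlftpfs and let the central FTP daemon serve those mount points as ordinary subdirectories, so the remote credentials live only on the central box.

      # mount the remote servers under the central server's FTP root
      sudo apt-get install curlftpfs
      mkdir -p /srv/ftp/FolderA /srv/ftp/FolderB
      curlftpfs ftp://usera:[email protected] /srv/ftp/FolderA -o allow_other
      curlftpfs ftp://userb:[email protected] /srv/ftp/FolderB -o allow_other
      # any FTP daemon (vsftpd, proftpd, ...) whose root is /srv/ftp now exposes
      # FolderA and FolderB as the "virtual" folders described above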

    Read the article

  • What is the fastest and best way to convert an rmvb video to mp4/mkv without losing any quality?

    - by Eric Leung
    The file will be played in a popbox3d. My old method was to convert the video using VidCoder (an offshoot of HandBrake) with normal settings, but I've recently confirmed that this significantly reduces video and audio quality. I bumped up the conversion quality to 'high profile' and this produced a higher quality video but raised the conversion time to about twice the video length (95 minutes to convert a 45-minute video) on a Core 2 Duo laptop. This is less than ideal when a large number of videos need to be converted. I have tried a direct remuxing using MKVToolNix but this produced a file that refused to display video on the popbox3d, which is consistent with the reported: [quote=other old thread] it is possible to put RealMedia A/V in MKV container (used MKVtoolnix) - however, it is awkward to play later. RV40 is only suspected to be based on H.264 - simplify, is not consistent with MPEG-4 AVC specification. [/quote] I have read that ... [quote=from old thread] Under normal circumstances, [ffmpeg] should convert the video to .video.mp4 and the audio to (.wav then to) .audio.mp4, then mux the video and audio into a new .mp4 file and delete the temporary video-only and audio-only files.[/quote] and I am currently attempting to discover how this is done. Help? PS: I download a lot of series from Asia and for some strange reason, RMVB is a really popular format over there. Sometimes, it's the only format that's available. Unfortunately, it's a format that is incompatible with the popbox3d, so I have to convert the files before I can watch them on my TV.
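
    For reference, a reasonably recent ffmpeg build can do the decode, re-encode and mux in a single pass with no intermediate files; a hedged example (the quality and speed settings are just sensible starting points, not tuned for the player):

      # re-encode RMVB (RealVideo/RealAudio) to H.264 video + AAC audio in an MP4 container
      # -crf 18 is near-transparent quality; raise it (20-23) for smaller files,
      # and use a faster -preset if encoding time matters more than size
      ffmpeg -i input.rmvb -c:v libx264 -crf 18 -preset medium -c:a aac -b:a 192k output.mp4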

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ gbytes gzipped files, between linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements & desires: It is never used to refresh updated files - only used to deliver new files. Because it's delivering files to downstream processes - it needs to rename the file once done so that a partial file doesn't get picked up. In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory. If the transfer fails (network down, file system full, permissions, file locked, etc), then it should retry periodically - and never fail in a non-recoverable way, or a way that sends the file twice or never sends the file. Should be able to copy files to 2+ destinations. Should have a consolidated log so that it's easy to find problems Should have an optional checksum feature Any recommendations? Can Unison do this well?
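
    If nothing off the shelf fits, the rename-on-completion requirement can be approximated with rsync plus an atomic rename on the receiving side; a rough sketch in which the hostnames, paths, file name and ".part" convention are all assumptions:

      #!/bin/bash
      # deliver one file to the transform host, expose it atomically, then archive the source
      set -e
      f="extract_20120101.gz"                                        # placeholder file name
      rsync -a --partial "$f" "etl@transform:/incoming/.$f.part"     # hidden temp name while in flight
      ssh etl@transform "mv /incoming/.$f.part /incoming/$f"         # rename = atomic hand-off to the daemon
      mv "$f" /data/archive/                                         # keep the source copy for recovery
      # run this from a retry loop (cron or a supervisor) so failed transfers are retried;
      # the rename step means the downstream daemon never sees a partial file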

    Read the article

  • KeePass lost password and/or corruption due to Dropbox/KeePassX

    - by GummiV
    I started using KeePass about a month ago to hold my passwords and online account info. Everything was stored in a single .kdb file, only protected with a password. I'm using Windows 7. Now KeePass can't open my .kdb file, giving the error "Invalid/wrong key". I'm fairly confident I have the right password. Although I might have mixed up a few letters, I've tried about two dozen different combinations to minimize that possibility - but I can't rule it out. My guess is however that the .kdb file got corrupted, either due to Dropbox syncing (only using it on one computer though) or because I edited the file using KeePassX on Ubuntu (dual boot on the same computer, accessing a mounted Win7 NTFS partition), or possibly a combination of both. I have tried restoring older versions (even the original one) from Dropbox and trying out all possible passwords without any luck (which does seem to rule out KeePassX as the culprit, since the oldest copies are from before I edited the file from Ubuntu). I have tried opening the file with the "Repair KeePass Database file" option, which always gives "0xA Invalid/corrupt file structure" (the same error as when a wrong password is typed). I was wondering if there was any way for me to salvage my hard-gathered data. I know generally that brute force cracking is not feasible, but since I can remember probably more than half of the usernames/passwords, and maybe given the fact that one of them comes up fairly often (my go-to pass for trivial stuff), that might simplify the brute force process to a doable time frame. Maybe the brute-force approach could incorporate the fact that I know the password length and what characters it's made from. (If we assume corruption, not a password blackout on my part.) I could do some programming if there are any libraries or routines that I could use. Other people seem to have had a similar problem: http://forums.dropbox.com/topic.php?id=6199 http://forums.dropbox.com/topic.php?id=9139 http://www.keepassx.org/forum/viewtopic.php?t=1967&f=1 So hopefully this question will become a suitable resource for people when searching the web. Feel free to tell me if you think this should rather be a community wiki.
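
    On the "doable brute force" idea: rather than coding it from scratch, the usual route is to extract a crackable hash from the database and feed a custom wordlist or mask to an existing cracker. A heavily hedged sketch - the tools are John the Ripper (jumbo build) and hashcat, and the file and wordlist names are placeholders:

      # extract a hash from the KeePass database, then try your remembered candidates
      keepass2john Database.kdb > keepass.hash
      john --wordlist=my_guesses.txt keepass.hash
      # or with hashcat (mode 13400 covers KeePass), using a mask built from the
      # known length and character set, e.g. six lowercase letters and two digits:
      # hashcat -m 13400 -a 3 keepass.hash ?l?l?l?l?l?l?d?d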

    Read the article

  • How can I redirect/forward all the UDP/TCP traffic on one interface to another interface in OpenWrt

    - by Sina Sou
    I am new to networking and I have a measurement device (D) that periodically sends all its readings over a few UDP multicast sockets (with different multicast IP addresses and different port numbers). The device also listens on a TCP socket on port 7234, through which its configuration can be modified. Since the device has just an Ethernet interface for communication and I want to make it work wirelessly, I decided to use a very small wireless OpenWrt-based router that attaches to the device (D) and redirects/forwards all the network traffic (both UDP and TCP) to the router's wireless interface. In order to simplify the problem, assume that the device (D) establishes the following sockets (at the same time): UM_SOCK1: UDP mcast socket on 239.1.2.3 port# 50620 UM_SOCK2: UDP mcast socket on 239.1.2.4 port# 50640 TC_SOCK3: TCP DHCP/STATIC ip address 192.168.1.200 port 7234 And (D) is connected to the OpenWrt router (R) via interface en01 (Ethernet); the router has its own wireless interface on wlan0. I want all the traffic on en01 to pass through wlan0 and vice versa (bi-directional): en01 <----> wlan0. What would be the minimum iptables or ... commands that I need to make this possible? I am also wondering whether the traffic redirection can be made easier if it is not based on IP addresses (not desired if the device is connected via DHCP); I would rather the redirection be interface-based (en01) or MAC-address based (the best solution, since my device has a unique MAC address). Thanks
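
    As a starting point - and hedged, because multicast relaying on OpenWrt usually needs more than plain routing - the interface-based (rather than IP-based) forwarding described above looks roughly like this. The unicast TCP traffic to port 7234 is handled by the FORWARD rules; the UDP multicast groups normally need a relay daemon such as igmpproxy or smcroute on top. The interface names below are placeholders - substitute the router's actual wired and wireless interface names:

      # allow routing between the two interfaces
      sysctl -w net.ipv4.ip_forward=1
      # forward by interface, not by IP address, in both directions
      iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
      iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
      # 239.1.2.3 / 239.1.2.4 multicast is not forwarded by plain routing;
      # configure igmpproxy or smcroute for those groups separately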

    Read the article

  • How can I configure Samba to share (read/write) any folder with root permissions?

    - by Mike Toews
    I have a CentOS 5 VirtualBox guest on a Win7 x64 host. I am attempting to set up a read/write share of a directory owned by root with my Windows host using Samba, but I'm having no luck after running around in circles. To simplify matters, I've disabled my firewall (/etc/init.d/iptables stop). As security and permissions are irrelevant for this purpose, I'd rather not have to set up another unix user/group/password. Here is the output from testparm: Load smb config files from /etc/samba/smb.conf rlimit_max: rlimit_max (1024) below minimum Windows limit (16384) Processing section "[Guest Share]" Loaded services file OK. Server role: ROLE_STANDALONE and the source of /etc/samba/smb.conf: [global] workgroup = WRKGRP netbios name = SMBSERVER security = SHARE load printers = No [Guest Share] comment = Guest access share path = /root/src read only = No guest ok = Yes Running /etc/init.d/smb restart shows an OK status. However, on my Windows host, I can only see the share folder on the guest \\IPv4, but I cannot go into "Guest Share": "The network name cannot be found". This is a common error, with a likely cause: the user you are trying to access the share with does not have sufficient permissions to access the path for the share. Both read (r) and access (x) should be possible. Am I trying to use root as a passwordless Samba guest? I'd like to - is it possible? How can I configure Samba to share (read/write) any folder with root permissions?
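
    Since security is explicitly irrelevant here, one commonly used (and deliberately unsafe) approach is to keep the guest share but force all file operations on it to run as root, so the restrictive permissions on /root stop blocking the guest user. A hedged, untested sketch in Samba 3.x smb.conf syntax:

      [global]
          workgroup = WRKGRP
          netbios name = SMBSERVER
          security = share
          ; map unknown users to the guest account instead of rejecting them
          map to guest = Bad User
          load printers = no

      [Guest Share]
          comment = Guest access share
          path = /root/src
          read only = no
          guest ok = yes
          ; perform every operation on this share as root (deliberately unsafe)
          force user = root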

    Read the article

  • Recommendation for a simple no-frills Windows PDF printer driver?

    - by Scott Bussinger
    I'm looking for an extremely simple Windows PDF printer driver that I can recommend to clients. Ideally it would have these characteristics: When you print something, it should just create the PDF as a temporary file and then display it in their default PDF viewer with no prompting. If they want to save it, they can save it manually from inside the viewer. This workflow should work with no special post-install configuration. Installation should be very simple: a "double-click the installation program and click Finish" sort of thing. No complicated multi-step installation, no asking questions your grandmother wouldn't know the answer to (preferably no questions at all), no extra crap being installed. An option for a completely silent installation would be nice, but not necessary. Ideally it would be free, to simplify installing it on a small network, but low cost is an option. I've tried quite a few but none really fit the bill. Some can achieve the first goal but only after careful configuration, some try to install extra toolbars, some have other installation complexities that would make it hard for extremely novice users to succeed. Any suggestions? Thanks!

    Read the article

  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have 2 eth cards on the same host. Both are connected directly with a LAN cable. I set eth0 with IP 192.168.1.2. I set eth1 with IP 192.168.1.1. I set this rule: iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0 There are no other rules (I ran iptables -X, -F). I send a TCP SYN packet (with a C++ program using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule (above) doesn't apply to this packet. When I sent the packet to a remote host and applied this rule on the remote host, it worked correctly. So I guess that this is due to the fact that both eth cards exist on the same host. I need to create an iptables INPUT rule for a local eth card (dest and src on the same machine). I need it to simplify testing. Did I guess the problem correctly? Is there a way to bypass this? PS - connecting them via a switch didn't help; the rule wasn't applied. Running on Ubuntu. tcpdump shows the packet: 10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0 but iptables logging like this shows nothing: iptables -A INPUT -p tcp -j LOG --log-prefix '*****************' iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
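
    The guess is right: traffic between two addresses that both belong to the same host is delivered over the internal loopback path, so it never hits the INPUT chain the way packets from a genuinely remote machine do, even if a copy of the frame shows up in tcpdump. A hedged way to keep the one-machine test rig but make the kernel treat eth1 as remote is to move it into its own network namespace (needs a reasonably recent kernel and iproute2):

      # move eth1 and its address into a separate namespace so 192.168.1.1 is no longer "local"
      ip netns add peer
      ip link set eth1 netns peer
      ip netns exec peer ip addr add 192.168.1.1/24 dev eth1
      ip netns exec peer ip link set eth1 up
      # the INPUT rule must live where the packet is received, i.e. inside the namespace
      ip netns exec peer iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0
      # packets sent from 192.168.1.2 (eth0, default namespace) now really cross the cable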

    Read the article

  • What happened to 'Copy Public Gallery Link' for folders in Photos in Dropbox?

    - by Transgenic
    I've been trying to figure out how to use the 'Photos' folder properly, but it does not seem to contain the functionality that is mentioned on various websites. From what I've read, you are supposed to create a folder within the 'Photos' folder. At that point, it becomes a public gallery for which you can access the context menu item Copy Public Gallery Link. However, I do not have this context menu item. I found a topic on the Dropbox forum, but it is 2 years old. Even the various website information mentioned is over a year old. I found a topic from this year on SuperUser that mentions the Share link menu item. I understand that this functionality is basically the same as Copy Public Gallery Link; however, the key difference is that it opens the website, where I then need to click the 'Copy link to this page' link, which adds it to the clipboard. My primary questions are: Did they phase out the Copy Public Gallery Link from the context menu entirely? If not, is there a way to re-enable this menu item? If not, am I using the Photos folder properly? Lastly, if there is no way to get this menu item, is there some kind of script or hack-around that will let me simplify the Share link + Copy link to this page steps without requiring it to open my web browser? Note: I had to edit my post to remove other website links that I mention since I'm considered a 'New User' here and am only able to post 2 hyperlinks. Sorry.

    Read the article

  • How can I change the file name of the Program Files (x86) folder?

    - by Madrauk
    Let me stop you before the "you shouldn't do that" and "you will corrupt your file paths". I know what I'm doing, but I'll give you the story to convince you. Basically, my hard drive is failing to the point where programs installed on it are not responding consistently. So, in preparation for its replacement, I'm moving as many files as possible to my secondary drive, because the new drive will be smaller (it's an emergency; I can't buy a fancy, expensive new drive) and so I can actually use my computer until the new drive comes in. The basic idea behind what I'm doing is that I've copied the contents of the Program Files (x86) folder to another spot on my other drive, and I want to replace the original folder on the C drive with a symbolic link that points to the other drive, so the programs can run from that drive and be fine, but it will save space on my C drive and simplify the moving process when the new one comes in. To do that, I need to rename the Program Files folder, make the symbolic link, hopefully delete the Program Files folder, then restart my computer so all the programs are running properly and I can confirm it works. Now that I've told you why, can anyone help me out?
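
    For reference, the junction itself is only a couple of commands from an elevated command prompt; a hedged sketch that assumes the copy to D:\Program Files (x86) is complete and that nothing is currently running from the old folder (in practice the rename often fails while Windows is up because 32-bit services hold files open, so doing it from a recovery environment or a second Windows install is safer):

      :: elevated cmd.exe
      ren "C:\Program Files (x86)" "Program Files (x86).old"
      mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"
      :: once everything is confirmed working, reclaim the space
      rd /s /q "C:\Program Files (x86).old"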

    Read the article

  • Is there any equivalence of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to our Subversion repository. The Subversion repository is a single large repository which consists of several related projects: svn-root |-- project1 | |-- branches | |-- tags | `-- trunk |-- project2 | |-- branches | |-- tags | `-- trunk `-- project3 |-- branches |-- tags `-- trunk Because files sometimes need to be moved between different projects, I don't want to break the repository into separate ones. I'm going to use git-svn to set up the git front-end, but I don't see how exactly to map the svn structure to git. The two systems treat branches and tags very differently and I doubt a direct mapping is possible. To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories sit there. But this will definitely result in too many files in the branches and tags directories. In Subversion, it's easy to just set the checkout depth to immediates, which checks out only the branch/tag names, without the directory contents, but I don't know if this can be done in git. The git-svn mapping has me stuck. I hope there's a more elegant solution.
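
    There is no direct git equivalent of --depth immediates, but the usual git-svn workarounds are either to map each project explicitly (so only that project's trunk/branches/tags are fetched) or to exclude the bulky paths with --ignore-paths; a hedged sketch with a placeholder repository URL:

      # one git repository per project, fetching only that project's layout
      git svn clone https://svn.example.com/svn-root \
          --trunk=project1/trunk \
          --branches=project1/branches \
          --tags=project1/tags \
          project1-git
      # or one clone of the whole root that skips the branch/tag contents entirely:
      # git svn clone https://svn.example.com/svn-root \
      #     --ignore-paths='^project[0-9]+/(branches|tags)/' svn-root-git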

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and return by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:var user = { name: "Rick", id="ricks", level=8 } sessionStorage.setItem("app_user",JSON.stringify(user)); to retrieve it:var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection in some cases even filters conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important and if it's not handled the user experience can be really disrupting. Content Caching If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly interface to the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings"  entry at the button. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering  as if it was the first page hit. The additional listings, and any filters, the selection of an item all were lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
State of the Session
sessionStorage and localStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources
- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • SOA Suite 11g Asynchronous Testing with soapUI

    - by Greg Mally
    Overview
    The Enterprise Manager test harness that comes bundled with SOA Suite 11g is a great tool for doing smoke tests and some minor load testing. When a more robust testing tool is needed, soapUI is often leveraged for reasons ranging from ease of use to cost effectiveness. However, when you want to start doing more complex testing than synchronous web services with static content, the free version of soapUI becomes a bit more challenging. In this blog I will show you how to test asynchronous web services with soapUI free edition. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the soapUI documentation: http://www.soapui.org/Getting-Started/

    Asynchronous Web Service Testing in soapUI
    When invoking an asynchronous web service, the caller must provide a callback for the response. Since our testing originates from soapUI, it is only natural that soapUI would provide the callback mechanism. This mechanism in soapUI is called a MockService. In a nutshell, a soapUI MockService is a simulation of a web service (that is, a process listening on a port). We will go through the steps of setting up the MockService for a simple asynchronous BPEL process.

    After creating your soapUI project based on an asynchronous BPEL process, notice that soapUI creates an interface for both the request and the response (i.e., the callback). The interface that was created for the callback will be used to create the MockService. Right-click on the callback interface and select the Generate MockService menu item. You will be presented with the Generate MockService dialogue, where we will tweak the Path and possibly the Port (depending on what ports are available on the machine where soapUI will be running). We will adjust the Path to include the operation name (append /processResponse in this example), and the port of 8088 is fine.

    Once the MockService is created, its window acts as a console/view into the callback process. When the play button is pressed (green triangle in the upper left-hand corner), soapUI will start a process running on the configured Port that will accept web service invocations on the configured Path.

    At this point we are "almost" ready to try out the asynchronous test, but first we must provide the web service addressing (WS-A) configuration on the request message. We will edit the message for the request interface that was generated when the project was created (SimpleAsyncBPELProcessBinding > process > Request 1 in this example). At the bottom of the request message editor you will find the WS-A configuration by left-clicking on the WS-A label. Here we will set up WS-A by changing the default values to:

    - Must understand: TRUE
    - Add default wsa:Action: checked
    - Reply to: ${host where soapUI is running}:${MockService Port}${MockService Path} … in this example: http://192.168.1.181:8088/mockSimpleAsyncBPELProcessCallbackBinding/processResponse

    We are now ready to run the asynchronous test from soapUI free edition. Make sure that the MockService you created is running and then push the play button for the request (green triangle in the upper left-hand corner of the request editor).
If everything is configured correctly, you should see the response show up in the MockService window. To view the response message/payload, just double-click on a response message in the Message Log window of the MockService. At this point you can expand the project to include a Test Suite for some load tests etc. This same topic has been covered in varying detail on other sites/blogs, but I wanted to simplify and detail how this is done in the context of SOA Suite 11g. It also serves as a nice introduction to another blog of mine: SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition.
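For reference, the WS-A settings above translate into addressing headers on the outgoing SOAP request along these lines. This is a hand-written illustration rather than a capture from soapUI - in particular the wsa:Action value and the MessageID are assumptions that depend on the defaults generated for your WSDL:

    <soapenv:Header xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                    xmlns:wsa="http://www.w3.org/2005/08/addressing">
       <!-- Must understand: TRUE -->
       <wsa:Action soapenv:mustUnderstand="1">process</wsa:Action>
       <!-- Reply to: the soapUI MockService host, port and path -->
       <wsa:ReplyTo>
          <wsa:Address>http://192.168.1.181:8088/mockSimpleAsyncBPELProcessCallbackBinding/processResponse</wsa:Address>
       </wsa:ReplyTo>
       <wsa:MessageID>uuid:example-message-id</wsa:MessageID>
    </soapenv:Header>

The BPEL process uses the wsa:ReplyTo address to post its asynchronous response, which is exactly what the MockService is listening for.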

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    A developer's life is never easy. A DBA's life is even crazier.

    DBA's Life
    When a developer wakes up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on; however, they have no clue what coding challenges they are going to face that day. A DBA's life is even crazier. When a DBA wakes up in the morning, they are often thankful that they were not disturbed during the night by server issues. The very next thing they wish for is that they do not get a challenge they can't solve that day. The problems a DBA faces every single day are mostly unpredictable, and they just have to solve them as they come during the day. The life of a DBA is not always bad, though. There are always ways and methods to overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them.

    Challenge #1: Synchronize Data Across Servers
    A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is nearly impossible to do the same with plain T-SQL. Thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

    Challenge #2: SQL Report Builder
    DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anybody cares about it. No matter how busy a DBA is, they are called upon to build reports on things on very short notice. I personally like to avoid any task which is given to me accidentally, and personally, building reports can be boring. I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. I use a third party SQL tool when I have to work with SQL reports. Some products have extended reporting capabilities; this group includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

    Challenge #3: Work with the OTHER Database
    The manager does not understand that MySQL is different from SQL Server, and SQL Server is different from Oracle. For them everything is the same. In my career, hundreds of times I have faced a situation where I am given a database to manage, or some task to do, when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and that I can do anything. Honestly, I can't, but I hardly dare to argue. I fall back on a third party tool to manage the database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, I used the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries.

    Well, that's it! There have been many other such occasions when I was saved by a tool. Maybe some other day I will write part 2 of this blog post.
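To illustrate why hand-rolling synchronization gets tedious, here is a rough T-SQL sketch of what a manual, per-table sync might look like (the dbo.Customers table, its columns, and the SourceServer linked server are invented for illustration). Multiply this by every table, foreign key, and edge case, and the appeal of a dedicated tool becomes obvious:

    -- Run on the target server; SourceServer is an assumed linked server
    MERGE INTO SalesDB.dbo.Customers AS tgt
    USING SourceServer.SalesDB.dbo.Customers AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED AND (tgt.Name <> src.Name OR tgt.Email <> src.Email) THEN
        UPDATE SET Name = src.Name, Email = src.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, Name, Email)
        VALUES (src.CustomerId, src.Name, src.Email)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;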
Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
Tagged: Devart, SQL Tool

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together.

    Datatype mis-matches in predicates that rely on implicit conversion (Plamen Ratchev)
    This is a great example that pokes holes in the whole theory of "if it works, it's not broken." Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

    CREATE TABLE [dbo].[Page]
    (
        [PageId] [int] IDENTITY(1,1) NOT NULL,
        [Title] [varchar](75) NOT NULL,
        [Sequence] [int] NOT NULL,
        [ThemeId] [int] NOT NULL,
        [CustomCss] [text] NOT NULL,
        [CustomScript] [text] NOT NULL,
        [PageGroupId] [int] NOT NULL
    );

    CREATE PROCEDURE PageSelectBySequence
    (
        @sequenceMin smallint,
        @sequenceMax smallint
    )
    AS
    BEGIN
        SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
        FROM [CMS].[dbo].[Page]
        WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
    END

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.
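The fix is straightforward: declare the parameters with the same datatype as the column they are compared against, so no implicit conversion is needed. A corrected sketch of the same procedure:

    CREATE PROCEDURE PageSelectBySequence
    (
        @sequenceMin int,   -- now matches the int Sequence column
        @sequenceMax int
    )
    AS
    BEGIN
        SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
        FROM [CMS].[dbo].[Page]
        WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
    END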
Using Correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev)
There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins, but more likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

    SELECT Page.Title,
           Page.Sequence,
           Page.ThemeId,
           Page.CustomCss,
           Page.CustomScript,
           PageEffectParams.Name,
           PageEffectParams.Value,
           ( SELECT EffectName
             FROM dbo.Effect
             WHERE dbo.Effect.EffectId = PageEffects.EffectId ) AS EffectName
    FROM Page
    INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId
    INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId

This can and should be written as:

    SELECT Page.Title,
           Page.Sequence,
           Page.ThemeId,
           Page.CustomCss,
           Page.CustomScript,
           PageEffectParams.Name,
           PageEffectParams.Value,
           Effect.EffectName
    FROM Page
    INNER JOIN PageEffects ON Page.PageId = PageEffects.PageId
    INNER JOIN PageEffectParams ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
    INNER JOIN dbo.Effect ON dbo.Effect.EffectId = PageEffects.EffectId

The correlated query may just as easily show up in the WHERE clause; it's not a good idea in the SELECT clause or the WHERE clause.

Few or No comments
This one is a bit more complicated and controversial. Not all comments are created equal. Some comments are helpful and need to be included; other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, and comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written; if they are needed, you need to consider reworking the SQL or simplifying your data model.

Use of functions in a WHERE clause (Anil Das)
Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will happen. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
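As a quick illustration of that last point (the dbo.Orders table and its columns are invented, not from Phil's list), wrapping the column in a function defeats an index on that column, while an equivalent range predicate - or a persisted computed column - keeps the query sargable:

    -- An index on OrderDate cannot be used here, because the function
    -- must be applied to the column for every row considered
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2013;

    -- The equivalent range predicate leaves the column untouched,
    -- so an index on OrderDate can be used
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE OrderDate >= '20130101'
      AND OrderDate <  '20140101';

    -- Alternatively, persist the function result as a computed column and index it
    ALTER TABLE dbo.Orders ADD OrderYear AS YEAR(OrderDate) PERSISTED;
    CREATE INDEX IX_Orders_OrderYear ON dbo.Orders (OrderYear);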

    Read the article

  • Keepin’ It Simple with StorageTek SL150

    - by Kristin Rose
    Are your customers' archive and data protection environments getting out of hand? Are they looking for a little simplicity in their lives? How about some scalability? Or are they looking for a way to save on capital and operational expenses? If you answered yes to any of these, then Oracle's new StorageTek SL150 Modular Tape Library is the product for you. It beats the competition in terms of simplicity, scalability and savings, and provides some seriously wallet-friendly revenue opportunities for you. If the long-term service annuities on the SL150 aren't convincing enough, then the resale margins, rebates and follow-on revenue from modular upgrades will be! The SL150 simplifies StorageTek's tape portfolio by replacing three products with one scalable solution that provides an entry point for repeat business within accounts. The SL150 expands your potential storage customer base to smaller companies with low cost, simple upgrades and streamlined management that help alleviate key customer pain points. With the SL150, your customers will be able to simplify growth of their archive and data protection environments with small entry configurations and 10x growth, something that would require multiple box swaps across up to three product categories with competitive products. With the SL150, Oracle can help you provide greater customer satisfaction with Simplicity, Scalability and Savings! We know you're probably wondering how you can get started and sell this new and magnificent product… Well, look no further, because the only thing you need to do is complete the SL150 Guided Learning Paths (GLPs). For some extra insight, watch the video below on the new StorageTek SL150 modular tape library, and don't forget to 'tweet' this post and share it on Facebook to spread the good news!

    Wishing you Simplicity, Scalability and Savings,
    The OPN Communications Team

    Read the article

  • Teeing Off With Chris Leone at OpenWorld 2012

    - by Kathryn Perry
    A guest post by Chris Leone, Senior Vice President, Oracle Applications Development

    Monday morning in downtown San Francisco - lots of sunshine, plenty of traffic, and sidewalks chock-full of people with fresh faces and blister-free feet. Let the week of Oracle OpenWorld begin! For a great Applications start, Chris Leone packed the house with his Fusion Applications overview session - he covered strategy, scope, roadmaps, and customer successes. Fusion Apps, the world's best SaaS suite, is built on 100 percent standards. Chris talked about its information-driven user experience, its innovative design, and the choice of deployment. People can run Fusion in the cloud, in a managed / hosted environment, or on premise -- or they can use a combination of these three models. About seventy percent of our customers go with SaaS. Release 5 of Fusion Apps will become available soon. The cadence of releases will be three times a year. The key drivers are to accelerate business success (no rip and replace) and to simplify business processes. Chris told the audience that organic Fusion is the centerpiece of our cloud solutions, rounded out with acquired offerings such as Taleo Recruiting and RightNow Customer Service. From the cloud solutions, customers can expect real time and predictive BI, social capabilities, choice of deployment, and more productivity because of a next generation UX called FUSE. Chris's demo showed a super easy, new UI that touts self service navigation. We'll blog about FUSE in the very near future. Chris said the next 365 days of Fusion Apps would include more localization, more industries, more power, more mobile, and more configurability. The audience was challenged to think hard about how Fusion could be part of their three-to-five year plans. Chris set up a great opportunity for you to follow up with your customers as they explore the possibilities.

    Read the article
