Search Results

Search found 23658 results on 947 pages for 'mixed case'.

  • IPv4 NameVirtualHost, IPv6 VirtualHost

    - by MadHatter
    Like many of us, I have an Apache server (2.2.15, plus patches) with a lot of virtual hosts on it. More than I have IPv4 addresses, to be sure, which is why I use NameVirtualHost to run lots of them on the same IPv4 address. I'm busily trying to get everything I do IPv6-enabled. This server now has a routed /64, which gives me an awful lot of v6 addresses to throw around. What I'm trying to find is a simple way to tell each v4 NameVirtualHost that it should also function as a VirtualHost on a unique IPv6 address. I really, really don't want to have to define each virtual host twice. Does anyone know of an elegant way to do this? Or to do something comparable, in case I've embedded any dangerously ignorant assumptions in my question?
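
    One possibility, sketched below with placeholder addresses: Apache accepts more than one address in a single VirtualHost directive, so each name-based v4 host can also listen on its own v6 address without being defined twice.

    ```apache
    # a minimal sketch, assuming placeholder addresses; the shared IPv4 address
    # stays name-based while each host also binds its own unique IPv6 address
    NameVirtualHost 192.0.2.10:80

    <VirtualHost 192.0.2.10:80 [2001:db8:1::100]:80>
        ServerName www.example.org
        DocumentRoot /srv/www/example
    </VirtualHost>
    ```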

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)): e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, it sadly doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago; this is what htop says about it now (2012-11-06_1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slowly, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that it's a good idea to check whether the disk is just that slow, because maybe it's damaged. I think these outputs tell me that this is not the case here: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, judging by tools like gkrellm or iotop, or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4,
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached There are around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseeks before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be linear progress on a large scale; maybe there are only some areas that need work, with big gaps between them. (Times are in CET.)
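
    For anyone in the same spot, e2fsck itself can report progress; a minimal sketch, assuming the PID from the htop output above (SIGUSR1 toggles the completion bar on a running check, and -C 0 requests it from the start):

    ```sh
    # toggle a progress display on the already-running check (PID 3704 from htop)
    sudo kill -USR1 3704

    # or, if restarting the check anyway, request completion output up front
    e2fsck -f -y -C 0 /dev/sdb1
    ```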

  • Updating ATI HD 5970 Graphics card - version errors?

    - by user55406
    I'm having an issue. My system specs are: Intel i7 960, 6GB Corsair XMS RAM, ATI HD5970 graphics card, Intel DX58SO motherboard, Cooler Master HAF 922 case, 1.5TB Seagate hard drive, Windows Vista x86 (32-bit). Here is my issue: when I go to the AMD/ATI website to update my graphics card, it doesn't update. When I run DxDiag and click on Display, it tells me my version is 8.17.0, while the latest version is 10.10.0. How can I get from 8.17.0 to 10.10.0? I figured it would have done that after I updated the driver for my graphics card. Thanks.

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday. It's about a project in C++ that depends on OpenCV; he wanted to include a specific OpenCV version in the svn and keep using this version, ignoring any updates, which I disagreed with. We had a heated discussion about that. His arguments: Everything has to be delivered in one package and we can't ask the client to install external libraries. We depend on a fixed version so that new updates of OpenCV won't break our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change. My arguments: True, everything has to be delivered to the client as one package, but that's what build scripts are for. They download the external libraries and create a bundle. Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible for a signature to change, unless it is a really crappy framework, which isn't the case with OpenCV. We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get that fix. It's simply inefficient and poor design to depend on a certain version/build of an external library, as it makes it difficult for our project to adapt to new changes in the future. So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?
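
    For what it's worth, the "build script + bundle" approach can pin a version just as firmly as committing binaries to svn; a minimal sketch, assuming a Unix shell with git and CMake available (the tag and paths are placeholders):

    ```sh
    #!/bin/sh
    # fetch the pinned, team-tested OpenCV tag and stage it into the delivery bundle
    set -e
    OPENCV_TAG=3.2.0   # the known-good version the team has validated
    git clone --depth 1 --branch "$OPENCV_TAG" https://github.com/opencv/opencv.git
    mkdir -p opencv/build
    cd opencv/build
    cmake ..
    make -j4
    make DESTDIR="$PWD/../../bundle" install   # ships with the product as one package
    ```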

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that: In plain language, briefly and directly tells the user what is wrong and why. Something like: Cannot connect because of a network problem. Affords the user a means to retry connecting (or attempts it automatically). The mobile context of use means users anticipate interruptibility and task disruption, so they will try again as an effective course of action. Tells the user when the connection is re-established, and off they go. Saves any work already done, implicitly. (Bonus points on the ADF critical task setting scale.) The following images, showing my experience reading an ADF-EMG Google Groups notification on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable kind of messaging user experience for mobile apps in this scenario. Inline connection lost message with Retry button Connection re-established toaster message The UX possible is dependent on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi), but taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality: the connection to the server has been lost, so just "Click OK" or "Contact your sysadmin".

  • Extend university wifi network [migrated]

    - by asfasdoiuh ouhouhouh
    I live on a university campus and I can get a WiFi signal outside my window, but not in the house. The solution I use at the moment is a USB WiFi dongle outside, connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I was trying to find another solution to improve the reception of my network. One idea is to set up a router outside (in a place with a stronger signal) and redirect the connection inside the house with an ethernet cable, but the problem is that our uni WiFi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication has to happen on the MAC address that connects to the net (the client appliance in this case). If I use a router with MAC clone capability, will I be redirected through the captive portal on my laptop and able to log in from there, or do I need to set up my router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want? Thank you all.
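
    On a Linux-based router, this is what the "MAC clone" checkbox does under the hood; a minimal sketch, assuming a placeholder uplink interface name and the MAC of the laptop that has already passed the portal:

    ```sh
    # clone the already-authenticated laptop's MAC onto the router's WiFi uplink
    # (wan0 and the address below are placeholders)
    ip link set dev wan0 down
    ip link set dev wan0 address 00:11:22:33:44:55
    ip link set dev wan0 up
    ```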

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s, allowing him to regain access to the site. Assuming that I have the above correct, my question is: what happens when I start regulating access per minute? If we choose 60 requests per minute, at what rate does the token bucket replenish?
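
    For reference, the per-minute configuration under discussion looks like this in nginx (a sketch; the zone name and burst value are placeholders):

    ```nginx
    # in the http{} block: track clients by address, allow 60 requests/minute
    # with a burst of 10, and reject excess requests at once instead of queueing
    limit_req_zone $binary_remote_addr zone=perip:10m rate=60r/m;

    server {
        location / {
            limit_req zone=perip burst=10 nodelay;
        }
    }
    ```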

  • Reverse lookup of inode/file from offset in raw device on Linux and ext3/4?

    - by lilinjn
    In Linux, given an offset into a raw disk device, is it possible to map back to a partition + inode? For example, suppose I know that the string "xyz" is contained at byte offset 1000000 on /dev/sda (e.g. xxd -l 100 -s 1000000 /dev/sda shows a dump that begins with "xyz"). 1) How do I figure out which partition (if any) offset 1000000 is located in? (I imagine this is easy, but am including it for completeness.) 2) Assuming the offset is located in a partition, how do I go about finding which inode it belongs to (or determining that it is part of free space)? Presumably this is filesystem-specific, in which case does anyone know how to do this for ext4 and ext3?
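
    A sketch of the usual ext3/ext4 route, using debugfs's icheck (block to inode) and ncheck (inode to path); the block and inode numbers below are placeholders:

    ```sh
    # 1) find which partition contains the byte offset (start/end printed in bytes)
    parted /dev/sda unit B print

    # 2) convert the offset within the partition to a filesystem block number:
    #    block = (1000000 - partition_start_in_bytes) / block_size
    tune2fs -l /dev/sda1 | grep 'Block size'

    # 3) map block -> inode, then inode -> pathname (244140 and 12345 are examples)
    debugfs -R 'icheck 244140' /dev/sda1
    debugfs -R 'ncheck 12345' /dev/sda1
    ```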

  • How to link individual Excel cells to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data, e.g. multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y. Each day the SQL table receives new data for the new day, and I want the spreadsheet to reference this new data in the column for that day. Any ideas? I have seen how you can generally bring in data from a SQL table (in our case using ODBC, as it is not MS SQL), but this is not simply importing multiple records as you would a CSV file; specific records in the SQL table must map to specific cells and columns in the spreadsheet.
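
    One hedged approach is to shape the data on the database side so each date arrives as a single row with one column per account code, which maps naturally onto the spreadsheet layout; the table and column names below are hypothetical:

    ```sql
    -- one row per date, one column per account code (names are placeholders)
    SELECT TxnDate,
           SUM(CASE WHEN AccountCode = 'A100' THEN Value ELSE 0 END) AS A100,
           SUM(CASE WHEN AccountCode = 'A200' THEN Value ELSE 0 END) AS A200
    FROM   DailyFinancials
    GROUP BY TxnDate
    ORDER BY TxnDate;
    ```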

  • Should I care about JUnit redundancy when using setUp() with the @Before annotation?

    - by c_maker
    Even though developers have switched from JUnit 3.x to 4.x, I still see the following 99% of the time: @Before public void setUp(){/*some setup code*/} @After public void tearDown(){/*some clean up code*/} Just to clarify my point... in JUnit 4.x, when the runners are set up correctly, the framework will pick up the @Before and @After annotations no matter the method name. So why do developers keep using the same conventional JUnit 3.x names? Is there any harm in keeping the old names while also using the annotations (other than it makes me feel like devs do not know how this really works and, just in case, use the same name AND annotate as well)? Is there any harm in changing the names to something more meaningful, like eachTestMethod() (which looks great with @Before, since it reads 'before each test method') or initializeEachTestMethod()? What do you do and why? I know this is a tiny thing (and may even be unimportant to some), but it is always in the back of my mind when I write a test and see this. I want to either follow this pattern or not, but I want to know why I am doing it, and not just because 99% of my fellow developers do it as well.
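
    To make the naming point concrete, here is a minimal JUnit 4 sketch in which the lifecycle methods carry descriptive names and only the annotations matter to the runner:

    ```java
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class WidgetTest {

        @Before
        public void initializeEachTestMethod() {
            // runs before every @Test method; the name is free-form in JUnit 4
        }

        @After
        public void cleanUpEachTestMethod() {
            // runs after every @Test method
        }

        @Test
        public void widgetDoesItsThing() {
            // test body
        }
    }
    ```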

  • Run ClickOnce Application from CLI

    - by Badger
    I am working on an auto-install script for where I work, and we have a ClickOnce-type application we use from a vendor. I have looked into it and we can't automate the install, but we would like to be able to at least start the install automatically. I have tried rundll32.exe dfshim.dll,ShOpenVerbApplication "%SOFTWARE%\ToolsApp.application" but it gives me an error about an invalid URI. What would probably be easiest is to use whatever program Windows has (Windows XP in our case) to run the default "handler" for the file. I don't know if any such thing exists, but that is what comes to mind.
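
    Two untested sketches that follow from the error message and the "default handler" idea; the path is a placeholder and neither is a confirmed fix:

    ```bat
    rem ShOpenVerbApplication expects a URI, so try a file:/// form of the path
    rundll32.exe dfshim.dll,ShOpenVerbApplication "file:///C:/Software/ToolsApp.application"

    rem or let the shell invoke whatever handler is registered for .application files
    start "" "C:\Software\ToolsApp.application"
    ```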

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is also useful in practice. Due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's say it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case by managing the configuration and easing its distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately. (more)
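
    As a taste of what such a manifest looks like, here is a minimal sketch using Puppet's built-in host resource (the hostname and address are placeholders); applied across 500 zones, it is exactly the /etc/hosts chore described above:

    ```puppet
    # ensure the same /etc/hosts entry exists on every node that gets this manifest
    host { 'build01':
      ensure => present,
      ip     => '192.0.2.20',
    }
    ```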

  • Laptop white screen on power-up. Still displays via HDMI output

    - by Inno
    My wife's laptop recently started displaying a white screen. It doesn't show POST or anything, just a white screen when it's powered on. However, it works normally with HDMI output to our television. I took it apart and fiddled with both ends of the display cable, but either I didn't fiddle correctly or that's just not the problem. I also noticed that the screen won't turn off anymore when the laptop is closed. Is there a name for the mechanism that controls this function, so I can try to locate it? My guesses are that the problem lies with the screen itself or the display cable, but I'm curious whether there's anything else I might be overlooking. Also of note is that the left hinge is partially broken. The corner of the plastic computer case broke off, so the hinge is exposed and doesn't stay in place. I've tried holding it in place, wiggling it around, and tapping various parts of the computer, but the white screen remains.

  • Logging violations of rules in limits.conf

    - by PaulDaviesC
    I am trying to log the details of programs that failed due to the limit caps defined in limits.conf. My initial plan was to do it using the audit system: the idea was to track the system calls related to the limits in limits.conf that failed. However, the problem with this approach is that it is not possible to track violations of CPU time, since that violation does not involve the failure of a system call. In the case of CPU time, what happens is that the program which exceeded the CPU time limit is delivered a SIGXCPU signal. So my question is: how should I go about logging the programs that violated their CPU time limit? Also, are there any limits.conf-specific logs available?
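
    For reference, the kind of cap in question uses the pam_limits syntax <domain> <type> <item> <value>; a minimal sketch (the group name and value are placeholders):

    ```
    # /etc/security/limits.conf: cap CPU time at 10 minutes for group "students";
    # a process exceeding this receives SIGXCPU, with no failing syscall to audit
    @students  hard  cpu  10
    ```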

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware of rebasing being dangerous after pushing. OTOH, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value for those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty well-packed commits and disregard the programmers' experimentation process? Whichever one you choose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.)
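
    For concreteness, the "package 5 commits into one" operation under debate looks like this (a sketch; the revision numbers and counts are arbitrary):

    ```sh
    # git: interactively squash the last 5 commits before publishing
    git rebase -i HEAD~5        # mark the later commits as "squash" or "fixup"

    # hg with the bundled rebase extension: collapse a branch into one changeset
    hg rebase --source 42 --dest default --collapse
    ```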

  • How to automount NTFS USB sticks on Xubuntu 12.10?

    - by netimen
    I'm running Xubuntu 12.10 on a Lenovo T520 laptop. If I plug in a FAT-formatted USB stick, it's mounted automatically, but if I plug in an NTFS-formatted one, I have to mount it manually. How can I make NTFS USB sticks mount automatically when plugged in? My /etc/fstab, in case it helps: # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/sda1 / ext4 errors=remount-ro,user_xattr 0 1 # swap was on /dev/sda5 during installation UUID=cd221c3e-44a8-459e-9dfb-04787f1cd0b6 none swap sw
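
    One thing worth ruling out first, sketched below: automounting on Xubuntu goes through udisks, which relies on the ntfs-3g driver for NTFS, so checking that it is installed and that a manual ntfs-3g mount works narrows the problem down (the device name is a placeholder):

    ```sh
    sudo apt-get install ntfs-3g           # userspace NTFS driver used for mounting
    sudo mount -t ntfs-3g /dev/sdc1 /mnt   # manual test; /dev/sdc1 is a placeholder
    ```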

  • Ubuntu PPTP VPN to Microsoft Server Command Line ONLY

    - by supreme
    I'm trying to set up a VPN connection from Ubuntu 12.04 LTS to a Microsoft VPN server (Ubuntu is the client in this case), but I only get this error message: .. connection failed! Check the log messages below for information why. Couldn't open the /dev/ppp device: Operation not permitted FATAL: Module ppp_generic not found./usr/sbin/pppd: Sorry - this system lacks PPP kernel support Details you may need: modprobe -v ppp > FATAL: Module ppp not found. uname -r -> 2.6.32-042stab076.8
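
    A hedged observation: the 042stab kernel string is typical of an OpenVZ container, where PPP support has to be enabled from the hardware node rather than inside the guest. A sketch of the usual steps, assuming host access (the container ID 101 is a placeholder):

    ```sh
    # on the OpenVZ hardware node, not inside the container
    modprobe -a ppp_generic ppp_async ppp_deflate
    vzctl set 101 --features ppp:on --save
    vzctl restart 101
    # inside the container, create the device node if it is still missing
    vzctl exec 101 mknod /dev/ppp c 108 0
    vzctl exec 101 chmod 600 /dev/ppp
    ```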

  • Delivery Status Notification (Relay) in Exchange Server 2007 with original email attachment

    - by Nick Kavadias
    I have recently set up Exchange Server 2007. The server is smarthosting outgoing messages. Users have 'request delivery receipt' on by default for their 'auditing' purposes in Outlook. They would like the original email attached to the delivery notification, as was the case in Exchange Server 2003. I need this same functionality in 2007. The question has been asked here, here and here, but I cannot find a valid solution. Here's some information about the functionality in Exchange 2003. The question is: can I replicate this functionality in 2007? Here is what a 2007 delivery message looks like: I know it's possible to customize DSNs. Can I make a custom DSN for this type of message and have the original included as an attachment? Has anyone got any other ideas?

  • Diff and ignore lines missing in one file

    - by Millianz
    I want to diff two files and ignore lines that are present in one file but missing in the other. For example File1: foo bar baz bat File2: foo ball bat I'm currently running the following diff command diff File1 File2 --changed-group-format='%>' --unchanged-group-format='' Which in this case would produce bar baz as the output, i.e. only missing or conflicting lines. I would like to only print conflicting lines, i.e. ignore cases where one line is missing from File2 and is present in File1 (not the other way around). Is there any way to do something like this using diff or do I have to resort to other tools? If so, what would you recommend?
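
    GNU diff lets you silence each kind of hunk independently, which is one hedged way to keep only the conflicting groups (the options below are GNU diffutils options):

    ```sh
    # print only changed groups; suppress groups that exist solely in File1
    # (old), solely in File2 (new), or are identical in both (unchanged)
    diff File1 File2 \
      --changed-group-format='%>' \
      --old-group-format='' \
      --new-group-format='' \
      --unchanged-group-format=''
    ```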

  • Problem multi-booting Ubuntu 12.04 with existing Windows XP, 7 and 8 on a 500GB HDD with 5 partitions

    - by Dhruva
    Here's my case: I have a 500GB HDD with 5 partitions, with XP, Windows 7 and Windows 8 RP in the first three. As per one of the instructions I've seen in this forum, I shrank my 4th partition to create 30GB of unallocated free space to install Ubuntu 12.04. But when I boot the Ubuntu CD and choose "Something Else", it only recognizes my 500GB HDD as a whole, as "/sda", and does not show the free 30GB space separately to install Ubuntu in, as suggested in the instructions mentioned in this forum. I've also tried to install it from within Windows 7 by mounting the Ubuntu ISO file and using the .exe file and the instructions thereupon (choosing free drive, user name, installation size, etc.), but that also failed after the PC restarted to continue the installation, showing an error about a file extension or partition. One thing to note: the PC I'm trying to install Ubuntu on is my home PC and doesn't have any internet connection. Hence, no updates or other online help. What shall I do? Kindly suggest. Sorry if I made some grammatical mistakes, as English is not my first language. Thanks in advance.

  • Ping myself works with IPv6 but not IPv4 in Windows 7

    - by user68546
    Hi! I've tried to solve the following problem with no luck and I need some professional help. The following is possible: Ping all computers (that I tried) in the domain without problems. Ping myself with localhost, which uses ::1. Ping myself with my given IPv6 IP. Internet access. The following is not possible: No one can ping me (request timeout) by computer name/IPv4/IPv6. I cannot ping myself with my given IPv4 IP or 127.0.0.1 (request timeout). Tried to enable/disable TCP/IPv4: same issue. Turned off Windows Firewall. Added an inbound rule to allow ICMP (just in case): same result. Is there someone out there who has any idea what the issue could be? Any help would be most appreciated!

  • When NOT to use a framework

    - by Chris
    Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer-reviewed code, and great extensibility. However, I think there is a downside to ANY framework in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings, or in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to a degree that you are no longer a 'PHP programmer' (for example), but a 'Drupal programmer', to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what was once considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000-line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric to determine when it is appropriate to use a framework. What do you consider the threshold to be? How do you decide when to use a framework, and when not to?

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). Ex: www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than namecheap (like Route 53)?
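
    In zone-file notation, the record being attempted looks like the sketch below (the ELB hostname is a placeholder); DNS wildcards do match multiple labels, so a.b.mydomain.com resolves as long as no more-specific record intervenes:

    ```
    *.mydomain.com.   IN   CNAME   my-elb-1234567890.us-east-1.elb.amazonaws.com.
    ```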

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are. •    Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint wraps the XSN inside a solution (WSP), then installs and deploys the solution. •    The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer, if the file name is shorter than 16) and the required WSP extension at the end. So if the form name was MyInfopathForm.xsn, the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp. •    It only gets worse! If there is already a solution file with the same name, SharePoint appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger, I suspect a middle finger, so when you deploy the same form – many versions of it, or as it was in my case, testing a script time and again – you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature! Well, there are ways around it. When working by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!
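
    A hedged sketch of the clean-up-then-deploy step, assuming the SharePoint 2010 InfoPath cmdlets are available (the path and form name are placeholders):

    ```powershell
    # remove any previously deployed copy first, so SharePoint does not mint a
    # new form-Name-123.wsp, then install the fresh template
    $xsn = "C:\Forms\WithdrawalOfRequestsForRefund.xsn"
    Uninstall-SPInfoPathFormTemplate -Identity "WithdrawalOfRequestsForRefund.xsn" -ErrorAction SilentlyContinue
    Install-SPInfoPathFormTemplate -Path $xsn
    ```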

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, updates occur in the shared library and we simply deploy the updated library to both systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are: easier maintenance, and decreased testing/QA time. Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the number of hours this will save in the future and how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?
