Search Results

Search found 881 results on 36 pages for 'verbose'.


  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have prefaced this with: yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:

        using "html5"
        using "extratags"

        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br
                    @company
                }
            }
        }

    The "using" keyword indicates which tags to use. "html5" might include all the HTML5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want to. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace; they will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single quotes to indicate "no variable substitution", like PHP does. Filter functions can be applied to variables like @variable|filter, and arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname"). You will also be able to include partial templates, like:

        for(@item in @item_list)
            include("item_partial", item=@item)

    Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments, where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :) Some questions:

    1. Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    2. Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    3. Tags and controls (like if and for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it clearer that the "tag" isn't behaving like a normal tag that will get replaced with content, but controls the flow. It would also allow name reuse.
    4. Do you like the attribute syntax (round brackets)?
    5. How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller's view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, so you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... I'm trying to decide which approach to take.
    6. Filtered variables inside quotes: "xxx {@var|filter} yyy", "xxx @{var|filter} yyy", or "xxx @var|filter yyy"; i.e., @ inside the braces, @ outside the braces, or no braces at all. I think no braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x"; then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better, because then we're consistent: the @ is always nudged up against the variable.

    I'll add more questions in a few minutes, once I get some feedback.


  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has two different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against each service, along with the number of requests and the average request time. This information can provide a good start in troubleshooting performance issues or tracking down a particular problem.

        >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

    To change the default rotation or size of output, these can be set as configuration variables for the server:

    - RequestAuditIntervalSeconds1: used for the shorter of the two summary intervals (default is 120 seconds)
    - RequestAuditIntervalSeconds2: used for the longer of the two summary intervals (default is 3600 seconds)
    - RequestAuditListDepth1: number of services listed for the first request audit summary interval (default is 5)
    - RequestAuditListDepth2: number of services listed for the second request audit summary interval (default is 20)

    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and you will then get an audit entry for each and every service request.

        >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

    What's nice is that it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information relevant to that service will be added to the tracing.

        >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

    You can even go into more detail and insert additional data into the tracing. You simply need to add this configuration variable with a comma-separated list of variables from local data to insert:

        RequestAuditAdditionalVerboseFieldsList=TotalRows,path

    In this case, for any search results, the number of items the user found is traced:

        >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

    I also recently ran into a case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client with the ones being executed on the server. So what we did was add a new field to the request audit list:

        RequestAuditAdditionalVerboseFieldsList=ClientToken

    And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.


  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates for Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General

    blast support: The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 Support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.

    New Commands

    Sanity Command: Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were being run at all. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes

    scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

        scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.N]

    Where:

    -v  Verbose mode: the command will print messages highlighting what it's doing.
    -a  Auto mode: the command does not prompt for input from the user as it runs.
    -d dest  Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir  Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If the output file is not specified using the -d option, scat_explore names it "scat_explore_system_name_hostid_lbolt_value_corefile_name."
    -t  Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.

        Tag Name   Definition
        FATAL      An extremely important message which should be investigated.
        WARNING    A warning that may or may not have anything to do with the crash.
        ERROR      An error, usually printed with a suggested command.
        ALERT      Used to indicate something the tool discovered.
        INFO       Purely informational message.
        INFO2      A follow-up to an INFO-tagged message.
        REDZONE    Usually used when printing memory info showing something is in the kernel's REDZONE.

    N  The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

        $ scat --scat_explore -a -v -r /tmp vmcore.0
        #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
        #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
        #Extracting crash data...
        #Gathering standard crash data collections...
        #Panic string indicates a possible hang...
        #Gathering Hang Related data...
        #Creating tar file...
        #Compressing tar file...
        #Successful extraction
        SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results: The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.

    Known Issues: There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:

    - Display of timestamps in threads and clock information is incorrect in some cases.
    - There are alignment issues with some of the tables produced by the tool.


  • 11.10 desktop alerts (volume change and terminal bell) stopped working but all other audio still works

    - by FlabbergastedPickle
    All, my sound works just fine in an 11.10 64-bit install on an HP dm1-4050 Sandy Bridge notebook (e.g. audio works in Banshee, Flash, games, the browser, Thunderbird email notification, etc.), but the core desktop notifications do not work (e.g. pressing Tab in a terminal where there is more than one option should trigger a terminal bell, and changing volume using the volume keys should be accompanied by the supporting "quack" that the volume app makes). I intentionally disabled the login sound as explained here on Ask Ubuntu, but even enabling it again makes no difference. These notifications did work fine before, and I am not sure when they actually stopped working, but it must have been fairly recently.

    The only things I did were trying to install some PPA edge xorg drivers for my Intel card (a separate issue), which I then reverted with ppa-purge once I discovered they did not improve anything, and checking volume settings with alsamixer, followed by alsactl store for the sound card, after I did some experimenting with volume settings for PCM (on my laptop, PCM at 100% crackles, so I had to lower it and make PulseAudio ignore its setting, as per Ask Ubuntu's page). That said, neither of these should have any bearing on the said notifications, since the volume is up and they clearly work everywhere else but in the core desktop events.

    The system-ready drum sound when Ubuntu boots and the user reaches the login screen also does not work. The guest login behaves exactly the same as mine: audio works (including the login sound, since I've not disabled it for the guest account), but there are no quacks when changing the volume and no terminal bell sounds...

    I've tried copying the Ubuntu sounds to /usr/share/sounds/ as suggested on Ask Ubuntu, and that did not work. I also tried using dconf-editor to check the sound theme settings and tried both freedesktop (which is what it was set to) and ubuntu, as suggested on Ask Ubuntu. This did not work either. I tried purging the ~/.pulse folder and the /tmp/*pulse* entries, rebooting, and restarting PulseAudio with the -D flag. Audio came back on and behaved just fine in all aspects (e.g. one can adjust volume levels, play music, games, in-browser sound, and other app alerts) except for the system-ready drum sound (at the login screen) and any system event (terminal bell and volume-change quack sound). It is interesting that the quack sound works inside System Settings > Sound when adjusting levels there, but it does not when volume is changed via the top bar's volume settings... I do recall that at one point yesterday, when I was restarting PulseAudio, the quacks that accompany volume changes did start working, but I have no idea what caused that. This was also when I first realized those alerts were not working. After rebooting, it was gone again.

    I did compile my own 3.0.14-rt31 kernel a little while ago, as instructed on one of the wikis for the 11.10 rt kernel. Everything works as before except for the said sound alerts. I am not sure if this began happening when I started using the rt kernel, though, and yesterday's momentary ability to hear those quacks while changing the volume makes me believe that the kernel is not responsible for this problem. One more thing I can think of is that I used the alsoft-conf tool to configure buffering for OpenAL (due to TA Spring's choppy audio) and changed the default audio device in there to ALSA. I also tried reverting it to PulseAudio as the only allowed output, but the bottom part of the Backend tab always reverts to ALSA even when I select PulseAudio. PulseAudio does remain the only active choice on top. This, however, once again does not make any sense in terms of preventing desktop audio alerts when everything else, including OpenAL games, plays sound just fine...

    So, there you have it, as verbose as I could make it :-). I tried all I could find on this issue and have had no luck so far... Any ideas?


  • disks not ready in array causes mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue, but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot, only three of the 10 devices in the raid are available; the others become available later in the boot process. Currently, unless I go through initramfs, I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs, the system boots fine and my array is mounted exactly where I intend it to be. Here are the pertinent files, as near as I can tell. Ask me if you want to see anything else.

    mdadm.conf:

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default (built-in), scan all partitions (/proc/partitions) and all
        # containers for MD superblocks. alternatively, specify devices to scan, using
        # wildcards if desired.
        #DEVICE partitions containers

        # auto-create devices with Debian standard permissions
        # CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR root

        # definitions of existing MD arrays

        # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
        # by mkconf $Id$
        ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae

    Here is fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        # / was on /dev/sdc2 during installation
        UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdc3 during installation
        UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0
        # mount large raid device on /data
        /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0

    Output of cat /proc/mdstat:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
              23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

        unused devices: <none>

    Here is the output of mdadm --detail --scan --verbose:

        ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
           devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd

    Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online.

    Edit: changed title to properly reflect the situation.


  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with:

    - CPU: AMD A6-3620 APU with Radeon Graphics
    - RAM: 6GB
    - BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012]
    - Disk: SATA1 (/dev/sda) - 1 TB (Windows)
    - Disk: SATA2 (/dev/sdb) - 1 TB, partitioned using "parted -a optimal /dev/sdb" as follows:
        1049KB  201MB   FAT32 (boot flag set)
        201MB   60GB    ext2 (/)
        68GB    78GB    linux-swap(v1) (swap)
        78GB    790GB   ext4 (/home)
        (the rest is "free" space reserved for other purposes, eventually)
    - Ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit]
    - Kernel: Linux 3.2.0-29-generic

    I created a bootable EFI USB from the (64-bit) ISO which I downloaded. I can run and install from the USB without any problems. The BIOS is an EFI BIOS that appears to be capable of booting in either EFI or Legacy mode.

    Initially, I did the "standard" install with NOTHING on disk 2 and let the installer configure everything. The net result of this was that when I started the computer and forced it into "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk.

    At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install and changed /dev/sdb1 to "bios_grub", /dev/sdb2 to "/" (ext4), /dev/sdb3 to swap, and /dev/sdb4 to "/home". [Note: fearing that elilo possibly did not recognize ext4, I switched /dev/sdb2 to ext2 and re-installed.] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formatted /dev/sdb1 as FAT32 and set the boot flag. I repeated the install, ignoring the messages about no bios_grub partition.

    After several attempts to get GRUB2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi. Based on the advice on one of the web pages I found, I renamed it to bootx64.efi. This worked.) In that same directory (/efi/boot), I copied the file pointed to by the link in /dev/sdb2/vmlinuz to /efi/boot/vmlinuz, and the file pointed to by the link in /dev/sdb2/initrd.img to /efi/boot/initrd.img. I created an elilo.conf file as follows:

        timeout=5000
        prompt
        default=linux-boot

        image=vmlinuz
            label=linux-boot
            read-only
            initrd=initrd.img
            root=/dev/sdb2

    The /efi/boot directory contains 4 files: bootx64.efi, elilo.conf, vmlinuz, and initrd.img.

    When I power-cycle the computer and force the boot menu, drive 2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing Enter, it appears to load the kernel (I have tried it with verbose=5, and there is a long string of messages, with the final one a command line to load the kernel and a series of several dots that fly by); then the screen goes blank, and it reboots the computer. [Note: I have also tried substituting the UUID, as found in the /etc/fstab of the installed system, for the root directory. This had no effect.]

    This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.


  • In hindsight, is basing XAML on XML a mistake or a good approach?

    - by romkyns
    XAML is essentially a subset of XML. One of the main benefits of basing XAML on XML is said to be that it can be parsed with existing tools. And it can, to a large degree, although the (syntactically non-trivial) attribute values will stay in text form and require further parsing.

    There are two major alternatives to describing a GUI in an XML-derived language. One is to do what WinForms did and describe it in real code. There are numerous problems with this, though it's not completely advantage-free (a question to compare XAML to this approach). The other major alternative is to design a completely new syntax specifically tailored for the task at hand. This is generally known as a domain-specific language.

    So, in hindsight, and as a lesson for future generations: was it a good idea to base XAML on XML, or would it have been better as a custom-designed domain-specific language? If we were designing an even better UI framework, should we pick XML or a custom DSL?

    Since it's much easier to think positively about the status quo, especially one that is quite liked by the community, I'll give some example reasons why building on top of XML might be considered a mistake. Basing a language off XML has one thing going for it: it's much easier to parse (the core parser is already available), requires much, much less design work, and alternative parsers are also much easier to write for 3rd-party developers. But the resulting language can be unsatisfying in various ways. It is rather verbose. If you change the type of something, you need to change it in the closing tag. It has very poor support for comments; it's impossible to comment out an attribute. There are limitations placed on the content of attributes by XML. The markup extensions have to be built "on top" of the XML syntax, not integrated deeply and nicely into it. And, my personal favourite: if you set something via an attribute, you use completely different syntax than if you set the exact same thing as a content property (a short sketch of this mismatch follows at the end of this post).

    It's also said that since everyone knows XML, XAML requires less learning. Strictly speaking this is true, but learning the syntax is a tiny fraction of the time spent learning a new UI framework; it's the framework's concepts that make the curve steep. Besides, the idiosyncrasies of an XML-based language might actually add to the "needs learning" basket.

    Are these disadvantages outweighed by the ease of parsing? Should the next cool framework continue the tradition, or invest the time to design an awesome DSL that can't be parsed by existing tools and whose syntax needs to be learned by everyone?

    P.S. Not everyone confuses XAML and WPF, but some do. XAML is the XML-like thing. WPF is the framework with support for bindings, theming, hardware acceleration, and a whole lot of other cool stuff.
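    To make the attribute-versus-content-property mismatch concrete, here is a minimal XAML sketch (standard WPF markup; the Button example is mine, not the question author's). Setting Background as an attribute takes a type-converted string, while setting the very same property as a property element takes a nested object:

        <!-- Property set via an attribute: a type-converted string -->
        <Button Content="OK" Background="Red"/>

        <!-- The same properties set as property elements: explicit object syntax -->
        <Button>
            <Button.Content>OK</Button.Content>
            <Button.Background>
                <SolidColorBrush Color="Red"/>
            </Button.Background>
        </Button>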


  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently.

    The team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch-type commits when I have time.

    Now, I can see that in some environments, this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and that works perfectly well), potentially introducing new bugs, etc.

    I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code to begin with if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, and I either mimic it or refactor it. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, I risk introducing bugs, etc.). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed.

    Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handle the environment, etc.


  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit, kernel Linux 3.2.0-39, 3.6GB memory, Intel Core 2 Duo CPU @ 2.40GHz x2. WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OS X, running Vista via Boot Camp (hey, I like lots of OS's, m'kay?).

    When installing from Ubuntu Software Center, my system very frequently freezes. This has happened on 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system, and I'm rather tired of it). By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no Alt-Tab, can't open a terminal (the keyboard does not work). Software Center does show the "installing" icon, but after that it greys out and I can't click anything. REISUB has no effect, but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed out/frozen. (I've never waited longer than that; if it's still unresponsive after 15 minutes, I just power down and restart.)

    Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes tells me that System Monitor is using about 20% of CPU, and nothing else is using much (zeros mostly). In fact, I didn't even see Software Center listed? However, at this point the system finally partially unfroze, the installation completed, and while I wasn't able to close Software Center, I was able to do a system shutdown and fresh restart, and I went and took a look at the syslog.

    In /var/log/syslog I see a lot of "blocked for more than 120 seconds" messages, similar to the question "ubuntu hang out with this message: blocked for more than 120 seconds", which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to the one in "Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang?" Note that that question was solved, but that's because the problem was being caused by Amazon, and Amazon fixed the bug. I'm not running anything Amazon-related. My syslog does look very similar, however. My question is also similar to "Troubleshooting server hang", but the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart; I want to figure out what is causing the problem so I can try to fix it.

    I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having any similar problems, and if so, what you all did to fix it. I see a number of questions about Software Center freezing, and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned, it's quite similar to the one posted in the Amazon-related question... and I didn't want to take up even more space unnecessarily as, my apologies, this question has already become extremely verbose!


  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade. It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies. Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML. This usually starts with a "quick and dirty" approach because you need an XML document, and it looks like this (for all of the examples here, we'll assume we're writing the body of a method intended to take a Contact object and return an XML string):

        return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);

    In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead. Work's done, right? No, it's not. You see, what I didn't realize was that this code would be used in the real world, instead of my fantasy world where I own all the data and can prevent any of it containing problematic values. If I use this code to create a contact record for the business "Sanford & Son", any XML parser will be incapable of processing the data, because the ampersand is special in XML and should have been encoded as &amp;.

    Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do: instruct the user that ampersands are "bad" and cannot be used without breaking computers. This may work in many cases, and is often accompanied by logic at the UI layer of applications to block these "bad" characters, but sooner or later someone is going to figure out that other applications allow them and will want the same. This often leads to the creation of "cleaner" functions that perform a replace on the strings for every special character that the person writing the function can think of. The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut. Sooner or later you end up writing your own somewhat functional XML engine.

    I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine. My employer/customer's needs have always been for something that may use XML, but is ultimately functionality that drives business value. I'm not going to build an XML engine.

    So how can I generate XML that is always well-formed without writing my own engine? Easy: use one of the ones provided to you for free! If you're in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I'm not going to fully describe either here). For .NET Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it's easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .NET Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact",
            new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I've discussed here reasons not to use this method and introduced powerful APIs that are built into the .NET Framework as an alternative. I've given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
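    To make the escaping payoff concrete, here is a small self-contained sketch (my own illustration, not part of the original post) showing that XElement encodes the problematic "Sanford & Son" value automatically:

        using System;
        using System.Xml.Linq;

        class XmlEscapingDemo
        {
            static void Main()
            {
                // The value that breaks naive string concatenation.
                string businessName = "Sanford & Son";

                // XElement escapes reserved characters for us.
                string xml = new XElement("Contact",
                    new XElement("BusinessName", businessName))
                    .ToString(SaveOptions.DisableFormatting);

                // Prints: <Contact><BusinessName>Sanford &amp; Son</BusinessName></Contact>
                Console.WriteLine(xml);
            }
        }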


  • Javascript Inheritance Part 2

    - by PhubarBaz
    A while back I wrote about JavaScript inheritance, trying to figure out the best and easiest way to do it (http://geekswithblogs.net/PhubarBaz/archive/2010/07/08/javascript-inheritance.aspx). That was 2 years ago, and I've learned a lot since then. But only recently have I decided to just leave classical inheritance behind and embrace prototypal inheritance. Most of us were trained in classical inheritance, using class hierarchies in a typed language. Unfortunately, JavaScript doesn't follow that model. It is both classless and typeless, which is hard to fathom for someone who's been using classes for the last 20 years. For the last two or three years, since I got into JavaScript, I've been trying to find the best way to force it into the class model, without much success. It's clunky and verbose and hard to understand. I think my biggest problem was that it felt so wrong to add or change object members at run time. Every time I did it, I felt like I needed a shower. That's the 20 years of classical inheritance in me. Finally I decided to embrace change and do something different: I decided to use the factory pattern to build objects instead of trying to use inheritance. JavaScript was made for the factory pattern, because of the way you can construct objects at runtime. In the factory pattern, you have a factory function that you call and tell to give you a certain type of object back. The factory function takes care of constructing the object to your specification.

    Here's an example. Say we want to have some shape objects, and they have common attributes like id and area that we want to depend on in other parts of our application. So the first thing to do is create a factory object and give it a factory method to create an abstract shape object. The factory method builds the object, then returns it.

        var shapeFactory = {
            getShape: function(id) {
                var shape = {
                    id: id,
                    area: function() { throw "Not implemented"; }
                };
                return shape;
            }
        };

    Now we can add another factory method to get a rectangle. It calls the getShape() method first and then adds an implementation to it.

        getRectangle: function(id, width, height) {
            var rect = this.getShape(id);
            rect.width = width;
            rect.height = height;
            rect.area = function() {
                return this.width * this.height;
            };
            return rect;
        }

    That's pretty simple, right? No worrying about hooking up prototypes and calling base constructors or any of that crap I used to do. Now let's create a factory method to get a cuboid (rectangular cube). The cuboid object will extend the rectangle object. To get the area, we will call into the base object's area method and then multiply that by the depth.

        getCuboid: function(id, width, height, depth) {
            var cuboid = this.getRectangle(id, width, height);
            cuboid.depth = depth;
            var baseArea = cuboid.area;
            cuboid.area = function() {
                var a = baseArea.call(this);
                return a * this.depth;
            };
            return cuboid;
        }

    See how we called the area method in the base object? First we save it off in a variable, then we implement our own area method and use call() to invoke the base function. For me this is a lot cleaner and easier than trying to emulate class hierarchies in JavaScript.


  • How do I properly host a WCF Data Service in IIS? Why am I getting errors?

    - by j0rd4n
    I'm playing around with WCF Data Services (ADO.NET Data Services). I have an Entity Framework model pointed at the AdventureWorks database. When I debug my svc file from within Visual Studio, it works great. I can say /awservice.svc/Customers and get back the ATOM feed I expect. If I publish the service (hosted in an ASP.NET web application) to IIS7, the same query string returns a 500 fault. The root svc page itself works as expected and successfully returns ATOM; the /Customers path fails. Here is what my grants look like in the svc file:

        public class AWService : DataService<AWEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Update: I enabled verbose errors (see the sketch at the end of this post for how that is typically done) and get the following in the XML message:

        <innererror>
          <message>The underlying provider failed on Open.</message>
          <type>System.Data.EntityException</type>
          <stacktrace>
            at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf( ...
            ...
          <internalexception>
            <message>Login failed for user 'IIS APPPOOL\DefaultAppPool'.</message>
            <type>System.Data.SqlClient.SqlException</type>
            <stacktrace>
              at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, ...
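    For reference, a minimal sketch of how verbose errors are typically switched on in a WCF Data Service (my own illustration, not the question author's code; it exposes exception details to clients, so remove it once debugging is done):

        using System.Data.Services;
        using System.Data.Services.Common;

        public class AWService : DataService<AWEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;

                // Surfaces inner exceptions (like the SqlException shown above)
                // in the HTTP 500 response body instead of a generic fault.
                config.UseVerboseErrors = true;
            }
        }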


  • SGEN doesn't work after upgrading from VS2008 to VS2010

    - by emddudley
    I just recently upgraded a VS2008/.NET 3.5 SP1 project to VS2010 and .NET 4. I have a post-build event which calls SGEN to generate the XmlSerializers assembly. Whenever I try to run it, I get the following error:

        "C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\sgen.exe" /debug /force /verbose /c:"platform:x86" "C:\path\to\SomeAssembly.dll"
        Microsoft (R) Xml Serialization support utility
        [Microsoft (R) .NET Framework, Version 2.0.50727.3038]
        Copyright (C) Microsoft Corporation. All rights reserved.
        Error: An attempt was made to load an assembly with an incorrect format: c:\path\to\someassembly.dll.
          - Could not load file or assembly 'file:///c:\path\to\someassembly.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.
        If you would like more help, please type "sgen /?".

    I get the same error running SGEN from the command line, but I can't figure out what the problem is. Any ideas?


  • Installing The ruby-gmail rubygem on Mac OS Snow Leopard

    - by johnnygoodman
    I'm working off these instructions: http://github.com/dcparker/ruby-gmail

    From the home directory, I do a standard install and good stuff happens:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ sudo gem install ruby-gmail
        Successfully installed ruby-gmail-0.2.1
        1 gem installed
        Installing ri documentation for ruby-gmail-0.2.1...
        Installing RDoc documentation for ruby-gmail-0.2.1...

    I head over to my ~/www dir, where I run scripts that include other rubygems successfully, and create a gmail directory. I create a script that includes rubygems and gmail but does nothing else:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ pwd
        /Users/johnnygoodman/www/gmail
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ls
        test-send.rb
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ cat test-send.rb
        require 'rubygems'
        require 'gmail'

    I run this script and the errors begin:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ruby test-send.rb
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- mime/message (LoadError)
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail/message.rb:1
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail.rb:168
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require'
            from test-send.rb:2
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$

    Here's my gem env:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ gem environment
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.7
          - RUBY VERSION: 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0]
          - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
          - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - universal-darwin-10
          - GEM PATHS:
            - /Library/Ruby/Gems/1.8
            - /Users/johnnygoodman/.gem/ruby/1.8
            - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - :sources => ["http://rubygems.org/", "http://gems.github.com"]
          - REMOTE SOURCES:
            - http://rubygems.org/
            - http://gems.github.com

    The path in the errors the script gives is not the same as the GEM PATHS given in the env output. However, I don't know how to make them match, or whether that's the significant thing here.


  • Problems using dual resolver

    - by user315228
    Hi, I am using a dual resolver and having a problem. The following is what I get when I run through Ant in debug and verbose mode ([http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]):

        [ivy:retrieve] resolved ivy file produced in c:\temp\ivy\[email protected]
        [ivy:retrieve] :: downloading artifacts ::
        [ivy:retrieve] [NOT REQUIRED] config#ego;4.3.1!ego.conf
        [ivy:retrieve] trying [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] tried [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] HTTP response status: 404 url=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] CLIENT ERROR: Not Found url=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] ibiblio: resource not reachable for axis2#axis2;working@commons-lang: res=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] WARN: [NOT FOUND ] axis2#axis2;working@commons-lang!axis2.jar (235ms)
        [ivy:retrieve] WARN: ==== commons-lang: tried
        [ivy:retrieve] WARN: ==== ibiblio: tried
        [ivy:retrieve] WARN: [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] [NOT REQUIRED] axis#axis-saaj;1.4!axis-saaj.jar
        [ivy:retrieve] [NOT REQUIRED] axis#axis-wsdl4j;1.5.1!axis-wsdl4j.jar

    Can you please tell me what is wrong with my ivysettings file or with my ivy file? The following is an excerpt from ivysettings.xml:

        <dual name="dual4">
            <filesystem name="commons-lang">
                <ivy pattern="${localRepositoryLocation}/[module]/ivy/ivy.xml"/>
            </filesystem>
            <ibiblio name="ibiblio" m2compatible="true" usepoms="false" />
        </dual>

    The problem (maybe) is that for each and every dependency I defined, I have a separate ivy.xml, and just one resolver as above? For example, for axis2.jar I have two dependencies in another ivy.xml; the dependencies are axis-saaj and axis-wsdl4j. Please help.

    Thanks, Almas


  • Ruby Gems Not Installing, Hangs While Getting Gems

    - by Tim Hoolihan
    I recently cleared out my entire Ruby install and reinstalled from source using the instructions at Hivelogic. I have been able to install a few gems, but most of the time "sudo gem install rails" hangs. I've added the -V flag, and it just seems to hang; I don't get any error, and the process cannot be killed. I can only reboot to kill the process. My Ruby info:

        [tim@ ~]# ruby -v
        ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10.2.0]
        [tim@ ~]# gem -v
        1.3.6
        [tim@ ~]# gem environment
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10.2.0]
          - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/local/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/local/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86-darwin-10
          - GEM PATHS:
            - /usr/local/lib/ruby/gems/1.8
            - /Users/tim/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - :sources => ["http://gems.rubyforge.org/", "http://gems.rubyforge.org"]
          - REMOTE SOURCES:
            - http://gems.rubyforge.org/
            - http://gems.rubyforge.org
        [tim@ ~]# which ruby
        /usr/local/bin/ruby
        [tim@ ~]# which gem
        /usr/local/bin/gem
        [tim@ ~]# uname -a
        Darwin tim-hoolihans-macbook-pro-15.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386
        [tim@ ~]#

    Any ideas?


  • Set margins in a LinearLayout programmatically.

    - by Timmmm
    I'm trying to use Java (not XML) to create a LinearLayout with buttons that fill the screen and have margins. Here is code that works without margins:

        LinearLayout buttonsView = new LinearLayout(this);
        buttonsView.setOrientation(LinearLayout.VERTICAL);
        for (int r = 0; r < 6; ++r) {
            Button btn = new Button(this);
            btn.setText("A");
            LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.FILL_PARENT,
                    LinearLayout.LayoutParams.FILL_PARENT); // Verbose!
            lp.weight = 1.0f; // This is critical. Doesn't work without it.
            buttonsView.addView(btn, lp);
        }
        ViewGroup.LayoutParams lp = new ViewGroup.LayoutParams(
                ViewGroup.LayoutParams.FILL_PARENT,
                ViewGroup.LayoutParams.FILL_PARENT);
        setContentView(buttonsView, lp);

    So that works fine, but how on earth do you give the buttons margins so there is space between them? I tried using LinearLayout.MarginLayoutParams, but that has no weight member, so it's no good. And it doesn't work if you pass it lp in its constructor either. Is this impossible? Because it sure looks it, and it wouldn't be the first Android layout task you can only do in XML.


  • How to configure EnterpriseLibrary.Logging to only output message and timestamp

    - by thomas nn
    I am trying to log a simple string to a log file. I only need the category, message, and timestamp. Thus I have configured app.config like this:

        <listeners>
          <add name="Flat File Destination"
               fileName="C:\Log\Phoenix.Common.Tests\Debug.log"
               header="-----------------------------------------------------------------"
               formatter="Text Formatter"
               footer=""
               rollFileExistsBehavior="Increment"
               rollInterval="Midnight"
               rollSizeKB="0"
               timeStampPattern="yyyy-MM-dd"
               listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </listeners>
        <formatters>
          <add name="Text Formatter"
               template="Category: {category} Message: {message} Timestamp: {timestamp}"
               type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </formatters>

    But I get all kinds of additional stuff in my log file:

        Priority: 50
        EventId: 50
        Severity: Verbose
        Title: Testing Log
        Machine: DK-PC-P4740
        App Domain: TestAppDomain: 80803a4f-8e18-47bb-901e-cdac784c8041
        ProcessId: 6492
        Process Name: C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\QTAgent32.exe

    How do I avoid this additional stuff? Can someone send me a link to an example that only logs the message into the log file? /Thanks
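    For reference, a minimal sketch of writing an entry that the Text Formatter above would render (my own illustration; "General" is a hypothetical category name that must match a category source in your configuration routing to the "Flat File Destination" listener):

        using System.Diagnostics;
        using Microsoft.Practices.EnterpriseLibrary.Logging;

        class LoggingExample
        {
            static void Log()
            {
                // Build a minimal entry; fields not referenced by the
                // "Category: {category} Message: {message} Timestamp: {timestamp}"
                // template should not appear in the formatted output.
                LogEntry entry = new LogEntry();
                entry.Message = "Order import completed";
                entry.Categories.Add("General"); // hypothetical category name
                entry.Severity = TraceEventType.Information;
                Logger.Write(entry);
            }
        }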


  • "Pretty" Continuous Integration for Python

    - by dbr
    This is a slightly... vain question, but BuildBot's output isn't particularly nice to look at. For example, compared to:

    - phpUnderControl
    - Hudson
    - CruiseControl.rb

    ...and others, BuildBot looks rather archaic. I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info). Basically: are there any continuous integration systems aimed at Python that produce lots of shiny graphs and the like?

    Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think BuildBot is better suited to having numerous build slaves, rather than everything running on a single machine like I was using it. Setting Hudson up for a Python project was pretty simple:

    1. Download Hudson from https://hudson.dev.java.net/
    2. Run it with java -jar hudson.war
    3. Open the web interface on the default address of http://localhost:8080
    4. Go to Manage Hudson, then Plugins, and click "Update" or similar
    5. Install the Git plugin (I had to set the git path in the Hudson global preferences)
    6. Create a new project, enter the repository, SCM polling intervals, and so on
    7. Install nosetests via easy_install if it's not already installed
    8. In a build step, add: nosetests --with-xunit --verbose
    9. Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml

    That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:

    - SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
    - Violations to parse the PyLint output (you can set up warning thresholds and graph the number of violations over each build)
    - Cobertura can parse the coverage.py output. Nosetest can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)


  • Visual Studio 2010 can no longer build .NET v3.5

    - by Adam Driscoll
    I have a 2010 project that is targeting .NET v3.5. Inexplicably, I can no longer build v3.5 projects. The project doesn't have ANY references added. It won't even let me add a reference to System.Core, as it is added by the 'build system'.

        warning CS1685: The predefined type 'System.Func' is defined in multiple assemblies in the global alias; using definition from 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'
        IFilter.cs(82,49): error CS0433: The type 'System.Func' exists in both 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll' and 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'

    Looks like something is grabbing onto 4.0, but I'm not quite sure how to fix it. Anyone else run into this? A coworker had this same issue, and it took a reinstall of Windows to correct the problem. I've opened a bug on this one: https://connect.microsoft.com/VisualStudio/feedback/details/558245/warning-cs1685-when-compiling-a-v3-5-net-application-in-visual-studio-2010

    If the compiler is set to verbose, I see this:

        FrameworkPathOverride = C:\Windows\Microsoft.NET\Framework\v4.0.30319

    which is defined as: "Specifies the location of mscorlib.dll and microsoft.visualbasic.dll. This parameter is equivalent to the /sdkpath switch of the vbc.exe compiler."

    Some other interesting tidbits:

    - I've created a new project altogether and cannot build v3.5 at all.
    - I can build 2.0, 3.0, 3.5 Client Profile, 4.0, and 4.0 Client Profile with no problem.
    - VB.NET can build v3.5, but C# cannot.
    - I've tried a reinstall of .NET 3.5, 4.0, and Visual Studio 2010 with no success.
    - Visual Studio debug logs show nothing interesting, and Safe Mode does not work.

    Trying to avoid a Windows reinstall...

    Read the article

  • WF 4.0 can't resume workflows on the staging/production environment

    - by Yasmine Atta Hajjaj
    I have developed various registration workflows using WF 4.0. Each workflow has various bookmarks. I am using the registration workflows from an ASP.NET application. I tested the application locally and it works fine (starting workflows, persisting to the database and resuming bookmarks). When I try to test it on the staging server, everything falls apart: I can no longer resume workflows, and I get this error:
    System.Runtime.DurableInstancing.InstancePersistenceCommandException was unhandled by user code
    Message=The execution of the InstancePersistenceCommand named {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflow was interrupted by an error.
    Source=System.Runtime.DurableInstancing
    StackTrace:
      at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
      at System.Runtime.DurableInstancing.InstancePersistenceContext.OuterExecute(InstanceHandle initialInstanceHandle, InstancePersistenceCommand command, Transaction transaction, TimeSpan timeout)
      at System.Runtime.DurableInstancing.InstanceStore.Execute(InstanceHandle handle, InstancePersistenceCommand command, TimeSpan timeout)
      at System.Activities.WorkflowApplication.PersistenceManager.Load(TimeSpan timeout)
      at System.Activities.WorkflowApplication.LoadCore(TimeSpan timeout, Boolean loadAny)
      at System.Activities.WorkflowApplication.Load(Guid instanceId, TimeSpan timeout)
      at System.Activities.WorkflowApplication.Load(Guid instanceId)
      at CEO_StartUpCEORegisterationTest.LoadInstance(Guid wfInstanceId) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 64
      at CEO_StartUpCEORegisterationTest.Page_Load(Object sender, EventArgs e) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 44
      at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
      at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
      at System.Web.UI.Control.OnLoad(EventArgs e)
      at System.Web.UI.Control.LoadRecursive()
      at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    InnerException: System.Data.SqlClient.SqlException
    Message=Index 'NCIX_KeysTable_SurrogateInstanceId' on table 'KeysTable' (specified in the FROM clause) does not exist.
    Source=.Net SqlClient Data Provider
    ErrorCode=-2146232060 Class=16 LineNumber=211 Number=308 Procedure=LoadInstance Server= State=1
    StackTrace:
      at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
      at System.Activities.DurableInstancing.SqlWorkflowInstanceStoreAsyncResult.SqlCommandAsyncResultCallback(IAsyncResult result)
    I know that this is quite verbose, but I have been banging my head against the wall for more than a week. All my searching turned up was advice about MS DTC; I enabled it on the staging server and installed the Application Server role there, and I am still getting the same error. I would appreciate it if anyone could help with this problem. Thanks in advance :)
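    The inner SqlException (a missing index on KeysTable) suggests the instance store database on staging doesn't match the schema SqlWorkflowInstanceStore expects - for example, a database created from older or partial scripts. A sketch of re-creating the store with the scripts that ship with .NET 4.0; the server and database names are placeholders for your staging setup:

        REM Re-run the WF4 instance store scripts against the staging database
        sqlcmd -S STAGING-SQL -d WFInstanceStore -i "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreSchema.sql"
        sqlcmd -S STAGING-SQL -d WFInstanceStore -i "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreLogic.sql"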

    Read the article

  • ImageMagick PDF to JPG conversion failing

    - by sbressler
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first:
    $ convert 10-03-26.pdf[1] test.jpg
    And I got the following:
    Error: /syntaxerror in readxref
    Operand stack:
    Execution stack:
      %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval--
    Dictionary stack:
      --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)--
    Current allocation mode is local
    ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1
    convert: Postscript delegate failed `10-03-26.pdf'.
    Running this instead:
    $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg
    I get the same /syntaxerror in readxref output as above, followed by:
    "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72" -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P" "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"
    Start of Image
    Define Huffman Table 0x00
      0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0
    Define Huffman Table 0x01
      0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0
    Define Huffman Table 0x10
      0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125
    Define Huffman Table 0x11
      0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119
    End Of Image
    convert: Postscript delegate failed `10-03-26.pdf'.
    Why would the conversion fail? Thanks!
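    Two notes that may help isolate this. First, ImageMagick's page index is zero-based: the verbose run shows -dFirstPage=2, so 10-03-26.pdf[1] is actually the second page; [0] is the first. Second, since convert just delegates to Ghostscript (as the -verbose output shows, and ESP Ghostscript 7.07.1 is quite old), you can take ImageMagick out of the loop and run gs directly with its standard options; the 150 dpi resolution here is an arbitrary choice of mine:

        # Render the first page straight to JPEG, bypassing ImageMagick
        gs -q -dNOPAUSE -dBATCH -dSAFER -sDEVICE=jpeg -r150 \
           -dFirstPage=1 -dLastPage=1 -sOutputFile=test.jpg 10-03-26.pdf

    If gs fails with the same /syntaxerror in readxref, the PDF's cross-reference table really is damaged, and trying a newer Ghostscript (which is more tolerant of broken xref tables) would be the next step.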

    Read the article

  • Asterisk Manager API SIPPeers - Permission Denied

    - by Matt H
    I want to use the Asterisk Manager API to show the status of all my SIP lines in a PHP web interface. I thought I'd start simple and use telnet to see it working, so I created a user in /etc/asterisk/manager.conf:
    [portal]
    secret = password
    read = all,system,call,log,verbose,command,agent,user
    Then I telnet to localhost on port 5038. This is what I get:
    asterisk ~ # telnet localhost 5038
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Asterisk Call Manager/1.0
    Action: login
    Username: portal
    Secret: 8u9sdgk
    Events: off
    Response: Success
    Message: Authentication accepted
    Action: SIPPeers
    Response: Error
    Message: Permission denied
    Why am I getting permission denied? I thought the user had basically full access. Do I need to restart Asterisk to make this work? I didn't restart it; on the other hand, I was able to log in, which makes me think manager.conf has been reloaded, since the portal user didn't exist before. Any ideas?
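    One thing that stands out, based on my reading of how manager.conf permissions work (so treat this as a suggestion to verify, not a confirmed fix): read = controls which event classes the user receives, while write = controls which actions the user may send. The [portal] user has no write line, which would explain why the login succeeds but the SIPPeers action is denied. A sketch:

        [portal]
        secret = password
        read = all,system,call,log,verbose,command,agent,user
        write = system,call,command

    After editing, reload the manager module explicitly (asterisk -rx "manager reload", or "module reload manager" on some versions) rather than relying on an implicit reload.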

    Read the article

  • JVM CMS Garbage Collecting Issues

    - by jlintz
    I'm seeing the following symptoms in an application's GC log file with the Concurrent Mark-Sweep collector:
    4031.248: [CMS-concurrent-preclean-start]
    4031.250: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
    4031.250: [CMS-concurrent-abortable-preclean-start]
    CMS: abort preclean due to time
    4036.346: [CMS-concurrent-abortable-preclean: 0.159/5.096 secs] [Times: user=0.00 sys=0.01, real=5.09 secs]
    4036.346: [GC[YG occupancy: 55964 K (118016 K)]4036.347: [Rescan (parallel) , 0.0641200 secs]4036.411: [weak refs processing, 0.0001300 secs]4036.411: [class unloading, 0.0041590 secs]4036.415: [scrub symbol & string tables, 0.0053220 secs] [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs] [Times: user=0.08 sys=0.00, real=0.08 secs]
    The preclean process keeps aborting continuously. I've tried adjusting CMSMaxAbortablePrecleanTime to 15 seconds, from the default of 5, but that has not helped. The current JVM options are as follows:
    -Djava.awt.headless=true -Xms512m -Xmx512m -Xmn128m -XX:MaxPermSize=128m -XX:+HeapDumpOnOutOfMemoryError -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:BiasedLockingStartupDelay=0 -XX:+DoEscapeAnalysis -XX:+UseBiasedLocking -XX:+EliminateLocks -XX:+CMSParallelRemarkEnabled -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:gc.log -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenPrecleaningEnabled -XX:CMSInitiatingOccupancyFraction=50 -XX:ReservedCodeCacheSize=64m -Dnetworkaddress.cache.ttl=30 -Xss128k
    It appears the concurrent-abortable-preclean never gets a chance to run. I read through http://blogs.sun.com/jonthecollector/entry/did_you_know, which suggested enabling CMSScavengeBeforeRemark, but the side effect of the extra pause did not seem ideal. Could anyone offer up any suggestions? Also, I was wondering if anyone had a good reference for grokking the CMS GC logs, in particular this line:
    [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs]
    I'm not clear on what memory regions those numbers are referring to.
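    For what it's worth, my reading of that remark line, based on the usual CMS log layout (worth double-checking against the HotSpot documentation for your JVM version):

        [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs
                       |      |         |      |         '-- total remark pause
                       |      |         |      '-- whole-heap capacity
                       |      |         '-- whole-heap occupancy after remark
                       |      '-- old (CMS) generation capacity
                       '-- old (CMS) generation occupancy after remark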

    Read the article

  • Cloning and renaming form elements with jQuery

    - by Micor
    I am looking for an effective way to either clone/rename or re-create address fields, to offer the ability to submit multiple addresses on the same page. So, with a form example like this:
    <div id="addresses">
      <div class="address">
        <input type="text" name="address[0].street">
        <input type="text" name="address[0].city">
        <input type="text" name="address[0].zip">
        <input type="text" name="address[0].state">
      </div>
    </div>
    <a href="" id="add_address">Add address form</a>
    From what I can understand, there are two options to do that:
    1. Recreate the form field by field and increment the index, which is kind of verbose:
    var index = $(".address").length;
    $('<input>').attr({ name: 'address[' + index + '].street', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].city', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].zip', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].state', type: 'text' }).appendTo(...);
    2. Clone the existing layer and replace the name in the clone:
    $("div.address").clone().appendTo($("#addresses"));
    Which one do you recommend in terms of efficiency? And if it's #2, can you please suggest how I would go about searching and replacing all occurrences of [0] with [1] ([n])? Thank you.
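    For option 2, a minimal sketch assuming the markup above: clone the last .address block, then rewrite each input's name with the next index before appending.

        // Clone the last address block and renumber its inputs
        $("#add_address").click(function (e) {
            e.preventDefault();
            var index = $("#addresses .address").length;       // next free index
            var clone = $("#addresses .address:last").clone();
            clone.find("input").each(function () {
                // Rewrite e.g. "address[0].street" -> "address[n].street"
                this.name = this.name.replace(/\[\d+\]/, "[" + index + "]");
                this.value = "";                               // clear copied values
            });
            clone.appendTo("#addresses");
        });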

    Read the article
