Search Results

Search found 54118 results on 2165 pages for 'default value'.

  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements: Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX Epoch timestamp and SHA256_hash is an SHA256 hash of the following unseparated items: your shared secret; the timestamp from the transaction (exactly the same as UNIX_UTC_Timestamp); the resource path (API name); and this HTTPS request's query string. Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equals signs (=); an empty value may be omitted, but the name and equals sign must be present. The initial question mark (?) is not included. Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters), with parameters separated from each other by an ampersand (&). Note: The query string must be URL encoded (excepting the following characters, per RFC 3986: hyphen, period, underscore, and tilde). You can find more by searching Google for "visa checkout developer updating 1 px image".
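
    Putting those requirements together, here is a minimal Python sketch of the token construction as described above. The shared secret, resource path, and query parameters are hypothetical placeholders, and the simple sorted() call only approximates the UTF-8 byte-order sort the spec asks for:

        import hashlib
        import time
        from urllib.parse import quote

        def visa_checkout_token(shared_secret, resource_path, params):
            # Query string: name=value pairs in lexicographic sort order,
            # joined by '&', URL encoded except - . _ ~ (RFC 3986 unreserved)
            query = '&'.join(
                '%s=%s' % (quote(k, safe='-._~'), quote(v, safe='-._~'))
                for k, v in sorted(params.items())
            )
            timestamp = str(int(time.time()))  # UNIX Epoch timestamp (UTC)
            # SHA256 over the unseparated concatenation of:
            # shared secret + timestamp + resource path + query string
            payload = shared_secret + timestamp + resource_path + query
            digest = hashlib.sha256(payload.encode('utf-8')).hexdigest()
            # Token format: x:UNIX_UTC_Timestamp:SHA256_hash
            return 'x:%s:%s' % (timestamp, digest)

        # Hypothetical values, for illustration only
        print(visa_checkout_token('MY_SHARED_SECRET', 'payment/info',
                                  {'apikey': 'MY_API_KEY', 'version': '1'}))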

  • Changing Admin Site URL (actually port) - how?

    - by TomTom
    I have a new install of the brand new SharePoint 2010. I use host-header-identified site collections for everything. By default the admin site is on a random port. I would like to move the admin site to port 80, for the server name. As all sites have coded names (for example "intranet", "projects"), this would allow administration via the server name, which is easier since external access does not have to remember the port number. How do I do this? I already changed the default URL, but the site (application) is still wrongly mapped. I don't find anything to change the IIS settings in the admin site. I possibly just miss it, so can anyone point me in the right direction?

  • I can't get grub menu to show up during boot

    - by wim
    After trying (and failing) to install better ATI drivers in 11.10, I've somehow lost my GRUB menu at boot time. The screen does change to the familiar purple colour, but instead of a list of boot options it's just a blank solid colour, and then disappears quickly and boots into the default entry normally. How can I get the bootloader menu back? I've tried sudo update-grub and also various different combinations of resolutions and colour depths in the StartUp-Manager application with no success (640x480, 1024x768, 1600x1200, 16 bits, 8 bits, 10-second delay, 7-second delay, 2-second delay...).

    Edit: I have already tried holding down Shift during bootup and it does not seem to change the behaviour. I get the message "GRUB Loading" in the terminal, but then, in the place where the GRUB menu normally appears, I get a solid blank magenta screen for a while. Here are the contents of /etc/default/grub:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=" vga=798 splash"

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

  • Simple HTML5 Friendly Markup Sample

    - by Geertjan
    From a demo done by David Heffelfinger (who has a great Java EE 7 screencast series here), on HTML5 friendly markup.

    index.xhtml:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:jsf="http://xmlns.jcp.org/jsf">
            <title>Data Entry Page</title>
            <body>
                <form method="POST" jsf:id='form'>
                    <table>
                        <tr>
                            <td>Name:</td>
                            <td><input jsf:id='name' type="text" jsf:value="${person.name}" /></td>
                        </tr>
                        <tr>
                            <td>City</td>
                            <td><input jsf:id='city' type="text" jsf:value="${person.city}"/></td>
                        </tr>
                        <tr>
                            <td><input type="submit" value="Submit" jsf:action="confirmation" /></td>
                        </tr>
                    </table>
                </form>
            </body>
        </html>

    confirmation.xhtml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <title>Data Confirmation Page</title>
            </head>
            <body>
                <h1>#{person.name}</h1>
                from
                <h2>#{person.city}</h2>
            </body>
        </html>

    Person.java:

        package org.demo;

        import javax.enterprise.inject.Model;

        @Model
        public class Person {

            String name;
            String city;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getCity() { return city; }
            public void setCity(String city) { this.city = city; }
        }

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple). On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/gateway for the handful of clients in 192.168/24. The machine has two NICs:

        eth0: internet-faced
        eth1: LAN-faced, 192.168.0.1, the default GW for 192.168/24

    The routing table is the two-NICs default, without manual changes:

        Destination     Gateway        Genmask          Flags Metric Ref Use Iface
        192.168.0.0     0.0.0.0        255.255.255.0    U     0      0   0   eth1
        (externalNet)   0.0.0.0        255.255.252.0    U     0      0   0   eth0
        0.0.0.0         (externalGW)   0.0.0.0          UG    0      0   0   eth0

    The NAT is then enabled only and merely by these actions; there are no more iptables rules:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # (all iptables policies are ACCEPT)

    This does the job, but I miss several things here which I believe could be a security issue: there is no restriction on allowed source interfaces or source networks at all, and there is no firewalling part such as:

        (set policies to DROP)
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    And thus, the questions of my sleepless nights are:

    1. Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently than an incoming internal connection (via eth1) as long as the output interface is eth0, and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me if that's right, or why it is not.

    2. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know if not using this option was indeed a security issue, or just irrelevant thanks to some mechanism I am not aware of.

    3. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above, or is the lack of them not affecting security?

    Thanks for clarification!

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    Hi, I have installed Lucid on my Lenovo laptop (Y410 series, x86 platform) and it is working fine. Now I have compiled kernel 2.6.37, downloaded from the kernel tree. I followed the usual procedure for compiling a kernel (make menuconfig, make, make modules, etc.). Then I created the initrd image using mkinitramfs and updated my GRUB using the update-grub command. update-grub detects the initrd image of the compiled kernel. However, when I boot from this kernel it gives me the following error:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! root=UUID=/... does not exist

    and then it falls onto the initramfs prompt. I have tried the following solutions discussed in different Ubuntu forums:

    1. disable UUID and point root=/dev/sda8 (sda8 is where my kernel image resides, both the default kernel and the compiled one) from /etc/default/grub
    2. compile the kernel using CONFIG_DEVTMPFS=y as suggested here

    Still I am unable to boot from the compiled kernel. Could someone please suggest a solution?

  • Unidentified network: How to configure TCP/IPv4 for Win7?

    - by Zolomon
    When I try to connect to the internet I keep getting the error "Unidentified network". I've made numerous attempts at restoring access, without success: IP release, flushing the DNS cache, reinstalling the NIC, reactivating the NIC, resetting the router and so on. I've read several times that it's my default gateway that's wrong. Until now I've had automatic IP/DNS configuration set without any problems, and then it stopped working for some reason. Does anyone know how I should specify the IP? My subnet mask is 255.255.255.0 and my default gateway is 192.168.0.1, but I have no idea how to determine what IP I should set. I use a D-Link DIR-655, and other computers on the network have IPs like 192.168.0.194; the next is 192.168.0.197. (I'm completely lost and am trying to cool down after two weekends of debugging filled with despair.)

  • Over-scan Issues when using HDTV through VGA

    - by RPG Master
    Right now all we can do is set the TV to 1280x768 instead of its native resolution of 1360x768. Setting it to its native resolution gives you a screen with a large portion of the left side of the screen cut off. We've tried everything with the TV, so now we're turning to the innards of Ubuntu in hopes of fixing this. The computer is using an NVIDIA GeForce GT240. This is its current xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildd@palmer) Fri Apr 9 10:35:18 UTC 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            # HorizSync 28.0 - 55.0
            # VertRefresh 43.0 - 72.0
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "CRT-0"
            HorizSync      28.0 - 55.0
            VertRefresh    43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 6600"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "1360x768 +0+0; 800x600 +0+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "1360x768 +0+0"
            SubSection "Display"
                Depth      24
            EndSubSection
        EndSection

  • Unable to access internet if wireless enabled

    - by balki
    The following is my route output. eth0 is my wired network and eth1 is my wireless network. Only the wired one has access to the internet. If I enable wireless, I am not able to access the internet: it tries to go via eth1 and I get the 404 page of the wireless router. Why does eth1 have higher preference, though the default is eth0 (link)?

        [balakrishnan@mylap ~]$ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.26.0.1       0.0.0.0         UG    0      0        0 eth0
        10.26.0.0       *               255.255.192.0   U     1      0        0 eth0
        link-local      *               255.255.0.0     U     1000   0        0 eth0
        192.168.1.0     *               255.255.255.0   U     9      0        0 eth1

  • Why is it good to have website content files on a separate drive other than system (OS) drive?

    - by Jeffrey
    I am wondering what benefits moving all website content files from the default inetpub directory (C:) to something like D:\wwwroot will give me. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IURS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I will gain. Some of the environment settings are as below:

        VMware
        Windows 2008 R2 64-bit
        IIS 7.5
        C:\inetpub\site1
        C:\inetpub\site2

    Also, as this article (moving the IIS7 inetpub directory to a different drive) points out, I'm not sure if it's worth the trouble to migrate files to a different drive:

        PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND
        SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE
        LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY
        SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS
        NOT POSSIBLE.

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets, matching the local routing policy:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like to send all the packets originating from 10.1.1.190 (my ENI) back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:

    1. I don't know if it's possible, but perhaps decrease the priority of the local table so the packets match table 10003.
    2. Use iptables to mangle these packets, and update the local table route to include the mark information.

    But I'm not sure if these are the right approaches.

  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu Documentation (Ubuntu 9.04 > Ubuntu Server Guide > Security > User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters". Is there a command for displaying the current password policies for a user (the way the chage command displays the password expiration information for a specific user)?

        > sudo chage -l SomeUserName
        Last password change                                : May 13, 2010
        Password expires                                    : never
        Password inactive                                   : never
        Account expires                                     : never
        Minimum number of days between password change      : 0
        Maximum number of days between password change      : 99999
        Number of days of warning before password expires   : 7

    This is rather than examining the various places that control the policy and interpreting them, since that process could contain errors. A command that reports the composed policy would be used to check the policy-setting steps.
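
    I am not aware of a stock command that composes the whole policy, so any check ends up reading the underlying files anyway. As a starting point, here is a rough Python sketch that gathers the aging defaults from /etc/login.defs and the PAM password rules from /etc/pam.d/common-password (the usual locations on Ubuntu; the parsing is deliberately simplistic):

        import re

        def password_policy():
            policy = {}
            # Aging defaults (PASS_MAX_DAYS, PASS_MIN_DAYS, PASS_MIN_LEN, PASS_WARN_AGE)
            try:
                with open('/etc/login.defs') as f:
                    for line in f:
                        if line.lstrip().startswith('#'):
                            continue
                        m = re.match(r'\s*(PASS_\w+)\s+(\S+)', line)
                        if m:
                            policy[m.group(1)] = m.group(2)
            except IOError:
                pass
            # Length/complexity rules live in the PAM stack on Ubuntu
            try:
                with open('/etc/pam.d/common-password') as f:
                    for line in f:
                        line = line.strip()
                        if line.startswith('#'):
                            continue
                        if 'pam_unix.so' in line or 'pam_cracklib.so' in line:
                            policy.setdefault('pam_rules', []).append(line)
            except IOError:
                pass
            return policy

        if __name__ == '__main__':
            for name, value in sorted(password_policy().items()):
                print('%-15s: %s' % (name, value))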

  • How to add a service to the S runlevel in Debian?

    - by MasterM
    I have the following script (what it does exactly is not important):

        #!/bin/sh -e
        ### BEGIN INIT INFO
        # Provides:          watchdog_early
        # Required-Start:    udev
        # Required-Stop:
        # Default-Start:     S
        # Default-Stop:
        # X-Interactive:     true
        # Short-Description: Start watchdog early.
        ### END INIT INFO

        # Do stuff here...

    I insert it into the S runlevel by invoking:

        insserv watchdog_early

    The appropriate link is created in /etc/rcS.d:

        S04watchdog_early -> ../init.d/watchdog_early

    and /etc/init.d/watchdog_early is executable (has mode 755). Despite all this, it is NOT being run at boot. Why?

  • FeedValidator & FeedBurner get 404 when accessing WordPress RSS feeds with permalinks enabled

    - by Wazbaur
    I'm helping a friend set up a self-hosted WordPress blog plus FeedBurner, and I'm seeing a problem with the feeds that I find somewhat mysterious. Using the default permalink structure (e.g., ?p=123) everything works as expected; I can follow the feed in Google Reader, navigate to it manually, and set it up in FeedBurner. However, once I switch away from the default permalink structure, FeedBurner and FeedValidator both report that accessing the feed returns HTTP 404, and Google Reader no longer shows new posts (I'm assuming for the same reason), yet I can navigate to the feed using a browser. When I do that, it appears as though nothing is wrong; there is a feed there and it contains all the posts I expect it to have. I've restarted the FeedBurner and Reader setup from the beginning after changing the link structure, so I don't think they're doing anything silly like looking at the feed at its old address. I've seen people with similar problems in various other places, but there doesn't seem to be a good answer anywhere.
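
    Since the feed works in a browser but 404s for the validators, one thing worth checking is whether the server answers differently depending on the client. A small Python sketch that fetches the feed with a few different User-Agent headers (the feed URL here is a made-up placeholder, and the theory that a rewrite rule or plugin is filtering non-browser clients is only a guess):

        from urllib.request import Request, urlopen
        from urllib.error import HTTPError

        FEED_URL = 'http://example.com/blog/feed/'  # hypothetical feed URL

        def fetch_status(user_agent):
            req = Request(FEED_URL, headers={'User-Agent': user_agent})
            try:
                resp = urlopen(req)
                return resp.getcode(), resp.headers.get('Content-Type')
            except HTTPError as err:
                return err.code, err.headers.get('Content-Type')

        # Compare a browser-like client against bot-like clients
        for ua in ('Mozilla/5.0 (X11; Linux x86_64)',
                   'FeedValidator/1.3',
                   'FeedBurner/1.0'):
            status, ctype = fetch_status(ua)
            print('%-35s -> %s (%s)' % (ua, status, ctype))

    If the bot-like agents get a 404 while the browser string gets a 200, the problem is in the rewrite/plugin layer rather than in the feed itself.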

  • LSB Script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command:

        update-rc.d scriptname defaults

    and just one launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          nodes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Starts all node apps
        # Description:       Starts all node apps like AAM, AMT,...
        ### END INIT INFO

        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH

        # Starts the redis server
        redis-server

        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning, so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out.

    Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations when gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status) {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.

  • Is there an application which fakes a browser and lets me choose which real one to use for a given URL?

    - by Dzmitry Lahoda
    Is there any application for Windows that does the following: I click a URL in Skype or an HTML file in Explorer. The application is the default "fake" browser, i.e. registered as the default browser. The application shows several buttons; each button represents an installed or running browser. I can choose the real browser, click it, and the specific URL is opened in the chosen real browser. A quick search did not reveal such an application. Context: I work in an environment where some sites work only in specific browsers. I get clickable URLs from different applications. Sometimes I want to launch a specific browser to use a specific add-in of it against the URL provided. I have a specific portable "secured" browser I want to launch only for trusted sites.
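
    I don't know of a ready-made tool, but the core of such a chooser is small enough to sketch. A rough Python version (the browser names and paths below are made-up examples; a real deployment would also need to register the script as the default browser in the Windows registry, which this sketch does not do):

        import subprocess
        import sys
        import tkinter as tk

        # Hypothetical example paths; point these at the browsers actually installed.
        BROWSERS = {
            'Firefox': r'C:\Program Files\Mozilla Firefox\firefox.exe',
            'Chrome': r'C:\Program Files\Google\Chrome\Application\chrome.exe',
            'Secured (portable)': r'D:\Portable\SecureBrowser\browser.exe',
        }

        def main():
            # The URL the OS hands to the "default browser" arrives as argv[1]
            url = sys.argv[1] if len(sys.argv) > 1 else 'about:blank'
            root = tk.Tk()
            root.title('Open with...')
            tk.Label(root, text=url, anchor='w').pack(fill='x', padx=10, pady=4)
            for name, path in BROWSERS.items():
                def launch(p=path):
                    subprocess.Popen([p, url])  # hand the URL to the chosen browser
                    root.destroy()
                tk.Button(root, text=name, command=launch, width=30).pack(padx=10, pady=4)
            root.mainloop()

        if __name__ == '__main__':
            main()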

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective, to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application; you may become dumber just by looking at this code; you have been warned... really the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager. You include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO: Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO: Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
    The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0.

    The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me. I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO xml files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    I am running Ubuntu 10.04.1 LTS, and it came pre-packed with Python 2.6, but I need to replace it with Python 2.7.2. (The reason is simple: 2.7 has a lot of features backported from 3.) I installed Python 2.7.2 using:

        ./configure
        make
        make altinstall

    The altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7, and placed the interpreter in /usr/local/bin/python2.7. Then, to help mod_wsgi find Python 2.7, I added the following to /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local

    I start Apache and run a test WSGI app, BUT I am greeted by Python 2.6.5 and not Python 2.7. Later I replaced the default python symlink to point to Python 2.7:

        ln -f /usr/local/bin/python2.7 /usr/bin/python

    Now typing 'python' on the console opens Python 2.7, but somehow mod_wsgi still picks up Python 2.6. Next I tried:

        PATH=/usr/local/bin:$PATH
        export PATH

    then did a quick restart of Apache, but yet again it's Python 2.6!! Here is my $PATH:

        /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

    Contents of /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local

        <VirtualHost *:80>
            ServerName wsgitest.local
            DocumentRoot /home/wwwhost/pydocs/wsgi
            <Directory /home/wwwhost/pydocs/wsgi>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi
        </VirtualHost>

    app.wsgi:

        import sys

        def application(environ, start_response):
            status = '200 OK'
            output = sys.version
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Apache error.log:

        'import site' failed; use -v for traceback
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python.
        [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations
        [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''.
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''.
        [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'.
        [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$
        [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'.

    Has anybody ever managed to make mod_wsgi run on a non-system-default version of Python?

  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing Django 1.5 and a custom User model, but I have some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                else:
                    message = 'disabled account, check validation email'
                    return render(
                        request,
                        'account-login-failed.html',
                        {'message': message}
                    )
            return render(request, 'account-login.html', {'form': form})

    My forms.py contains my register form:

        class RegisterForm(forms.ModelForm):
            """ a form to create user"""
            password = forms.CharField(
                label="Password",
                widget=forms.PasswordInput()
            )
            password_confirm = forms.CharField(
                label="Password Repeat",
                widget=forms.PasswordInput()
            )

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Password don't math")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate() give me None? I know I'm trying to authenticate() with an email as username, but is that not one of the reasons to use a custom User model?
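
    One detail worth checking, offered here as a likely culprit rather than a confirmed diagnosis: Django's authenticate() compares hashed passwords, but the save() above assigns the raw password to user.password, so the comparison can never succeed. A sketch of a corrected save(), using the standard set_password() from AbstractBaseUser:

        def save(self):
            user = super(RegisterForm, self).save(commit=False)
            # set_password() stores a salted hash; assigning user.password
            # directly stores plain text, and authenticate() then returns None
            user.set_password(self.cleaned_data['password'])
            user.activation_key = generate_sha1(user.email)
            user.save()
            return user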

  • Cocos2d: carom like game

    - by Raj
    Now I am working on a carrom-like game using cocos2d + Box2D. I set the world gravity to (0,0), wanting gravity on the z-axis. I set the following values for the coin and striker bodies:

        Coin body:
        density = 20.0f; friction = 0.4f; restitution = 0.6f;
        Shape: circle with radius 15/PTM_RATIO

        Striker body:
        density = 25.0f; friction = 0.6f; restitution = 0.3f;
        Shape: circle with radius 15/PTM_RATIO

    The output is not smooth when I apply ApplyLinearImpulse(force, position): the coin movement looks like it is floating in air and takes too much time to stop. Which values for the coin and striker would make it look like real carrom?
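
    The floating feel is usually less about density or restitution and more about damping: in a top-down world with gravity (0,0), fixture friction only acts between bodies, so nothing models the drag of the board cloth, and pieces glide forever. Linear (and angular) damping is the standard stand-in. A small pybox2d sketch of the idea; the damping constants are guesses to tune by eye, not values from any carrom reference:

        from Box2D import b2CircleShape, b2FixtureDef, b2World

        PTM_RATIO = 32.0

        world = b2World(gravity=(0, 0))  # top-down board: no in-plane gravity

        def make_piece(x, y, density, friction, restitution):
            body = world.CreateDynamicBody(position=(x, y))
            body.CreateFixture(b2FixtureDef(
                shape=b2CircleShape(radius=15.0 / PTM_RATIO),
                density=density,
                friction=friction,        # acts only piece-to-piece and piece-to-wall
                restitution=restitution,
            ))
            # Stand-in for the drag of the board surface; tune by eye
            body.linearDamping = 4.0
            body.angularDamping = 2.0
            return body

        coin = make_piece(5.0, 5.0, density=20.0, friction=0.4, restitution=0.6)
        striker = make_piece(5.0, 2.0, density=25.0, friction=0.6, restitution=0.3)

        # Flick the striker; with damping it now slows down like a real piece
        striker.ApplyLinearImpulse((0.0, 40.0), striker.worldCenter, True)

    The same knob exists in the C++ API as b2Body::SetLinearDamping(), so it carries over directly to a cocos2d + Box2D project.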

  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment where I have one Linux server, one OpenBSD router and one Linux client, and I want to be able to limit how much bandwidth the client should be able to use. I've been performing these tests with netcat and time (using time to measure the duration of a transfer done with netcat), and the queues aren't exact at all. For example: when setting a bandwidth limit of 10Mbit, the client cannot use more than five Mbit; when setting a limit of 100Mbit, the client cannot use more than around 50Mbit. (These tests use the TCP protocol; the queues will for some reason not work with UDP.) The config looks like this (using the 100Mbit limit in the example):

        # queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq(default)

        # pass rules (test)
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low
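
    To make the runs repeatable while tweaking queue values, the netcat-plus-time measurement can be replaced by a small script. A rough Python receiver along those lines (the sender can stay plain netcat, e.g. feeding a large file into nc on the server side; the host and port are placeholders):

        import socket
        import time

        HOST, PORT = '192.168.0.10', 9000  # placeholder server address

        def measure(host, port):
            sock = socket.create_connection((host, port))
            total = 0
            start = time.time()
            while True:
                chunk = sock.recv(65536)
                if not chunk:
                    break
                total += len(chunk)
            elapsed = time.time() - start
            # Report in Mbit/s so the number is comparable to the queue limit
            print('%d bytes in %.1f s = %.1f Mbit/s'
                  % (total, elapsed, total * 8 / elapsed / 1e6))

        measure(HOST, PORT)

    Averaging a few runs makes it easier to tell a real halving of throughput from TCP simply ramping up slowly on a short transfer.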

  • No MAU required on a T4

    - by jsavit
    Cryptic background

    One of the powerful features of the T-series servers is hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains that had a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units and issue the appropriate ldm set-mau N mydomain commands.

    The T4 changes things

    The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workload, as illustrated at https://blogs.oracle.com/bestperf.

    While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains; it "just happens". This is way way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4, there is no management necessary; you just have hardware crypto everywhere all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all: just leave the default of 0.

    Summary

    Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how the accelerators are allocated; it "just happens".

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    As a new user, I'm not allowed to post more than two hyperlinks here, so I have added a space after every http (ignore the spaces and read them as full URLs).

    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some of the folders based on the year and month, while leaving other folders pointing to their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:

        http ://mydomain.com/wordpress/myblog/2010/02
        http ://mydomain.com/wordpress/myblog/2011/01
        http ://mydomain.com/wordpress/myblog/2012/01

    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:

        http ://mydomain.com/wordpress/myotherblog/2010/02
        http ://mydomain.com/wordpress/myotherblog/2011/01

    and so on, and I would like 2012 and beyond to go to the actual site (http ://mydomain.com/wordpress/myblog/2012/01). I tried mod_rewrite with the following, one rule at a time, to test redirection for just one year (intending to expand later to the other years), and none of them worked!

    * RewriteEngine is already on, since there are some default WordPress rewrites.
    * RewriteBase is set to http://mydomain.com/wordpress/ .
    * I put my rule before all the other default WordPress rules are processed.

    Didn't-work solution #1:

        RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1

    Didn't-work solution #2:

        RewriteRule /myblog/2010/(.*) http ://mydomain.com/myotherblog/2010/$1 [R=301]

    Didn't-work solution #3:

        RedirectPermanent /myblog/2010/(.*) http ://mydomain.com/myotherblog/2010/$1

    I've also tried the above rules with and without a fully qualified URL for the new location. The rewrite log, with log level set to 9, did not provide any useful information. It shows that it looks at the pattern specified against the URL (as mentioned in the rule), but what finally happens is a passthrough to http ://mydomain.com/myblog/ for all URLs, or a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions?
