Search Results

Search found 19408 results on 777 pages for 'output formats'.


  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. It is featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite of the Kimball Group; have a look at the book samples, especially the sample package for custom SCD handling.

    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property

    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).

    Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
    FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
    CRC32 - Uses a standard 32-bit cyclic redundancy check (CRC), which provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox by hand. This process is described in detail in the related FAQ entry How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads

    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (including R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    Checksum Transformation for SQL Server 2005
    Checksum Transformation for SQL Server 2008
    Checksum Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)

    SQL Server 2008
    Version 2.0.0.27 - Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
    Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
    Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)

    SQL Server 2005
    Version 1.5.0.43 - Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
    Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
    Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
    Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
    Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed a custom UI bug with the Output column name not persisting. (10 Nov 2005)
    Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
    Version 1.0.0 - Public Release (Beta). (30 Oct 2004)
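    To make the CRC32 option concrete, here is a minimal sketch of a standard CRC-32 (IEEE polynomial) computed over a byte sequence - illustrative only, not the component's actual source; how the component serializes pipeline columns into bytes is not shown here.

        using System;
        using System.Text;

        static class Crc32Sketch
        {
            // Bitwise CRC-32 with the reflected IEEE polynomial 0xEDB88320.
            public static uint Compute(byte[] data)
            {
                uint crc = 0xFFFFFFFF;
                foreach (byte b in data)
                {
                    crc ^= b;
                    for (int i = 0; i < 8; i++)
                        crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
                }
                return ~crc; // final complement
            }

            // Example: hash the concatenated text of the selected columns.
            static void Main()
            {
                uint checksum = Compute(Encoding.UTF8.GetBytes("CustomerName|City"));
                Console.WriteLine(checksum);
            }
        }

    Unlike the GetHashCode-based algorithm, this calculation is defined by the polynomial alone, which is why it is stable across x86 and x64.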


  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.

    Hopping (and its syntactic-sugar-sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. The windows are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows.

    Two aspects in this diagram are important:
    Since this window is overlapping, an event can fall into more than one window.
    If an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.)

    The set-based operation will be applied to each of these sets, yielding a result. This result is:
    A single scalar value, in the case of built-in or user-defined aggregates.
    A subset of the input payloads, in the case of the TopK operator.
    Arbitrary events, when using a user-defined operator.

    The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with their own timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.) Let's assume we are calculating the average over some payload field:

    var result = from window in source.HoppingWindow(
                     TimeSpan.FromHours(1),
                     TimeSpan.FromMinutes(15),
                     HoppingWindowOutputPolicy.ClipToWindowEnd)
                 select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes - except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time).

    The "forced" output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply - let me explain: imagine you have a lot of events that you group by and then aggregate within each group - a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can remedy this.

    Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur. In other words, if you mark all event start and end times on your timeline, you are looking at all snapshot window boundaries. If your events never overlap, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed in a recent tweet: "I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need."

    Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window to these events, it will appear to be "looking back into the past". If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

    var result = from window in source
                     .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                     .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                 select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns, for instance "running totals": keep the event start times, but snap their end times to some fixed time, like the end of the day. Each snapshot then "sees" all events that have happened in the respective time period so far.

    Regards,
    The StreamInsight Team
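    As a rough illustration of that running-totals idea - a sketch only, assuming the same source stream and Value payload field as the examples above - the lifetime modification could look like:

    // Keep each event's start time, snap its end to a fixed point (end of day),
    // then aggregate over snapshot windows so each result "sees" everything so far.
    DateTimeOffset endOfDay = DateTime.UtcNow.Date.AddDays(1);
    var runningTotal = from window in source
                           .AlterEventLifetime(e => e.StartTime, e => endOfDay - e.StartTime)
                           .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                       select new { total = window.Sum(e => e.Value) };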


  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and Jython-based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

    1. Multiple resources under a single verb
    A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

    Standard mode:
    $ emcli list -resource="Administrators"

    Interactive mode:
    emcli> list(resource="Administrators")
    The output will be the same as standard mode.

    Script mode:
    $ emcli @myAdmin.py
    Enter password : ******
    The output will be the same as standard mode.

    Contents of the myAdmin.py script:
    login()
    print list(resource="Administrators", jsonout=False).out()

    To get a list of all available resources use:
    $ emcli list -help

    With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

    2. Consistent formatting
    It is possible to format the output of any resource consistently using these options:

    -columns
    This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:
    $ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"
    To get a list of columns in a resource use:
    $ emcli list -resource="Administrators" -help
    You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30:
    $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"
    This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

    -colsize
    This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of USER_TYPE as 20 and DEPARTMENT as 30:
    $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

    The existing standard EM CLI formatting options are also available in the list verb: -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script. There are so many uses depending on your needs. Have a look at the resources and the columns in each resource, and refer to the EM CLI book in the EM documentation for more information.

    3. Search
    Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the SQL*Plus WHERE clause. The following operators are supported:
    =
    !=
    >
    <
    >=
    <=
    like
    is (must be followed by null or not null)

    Here is an example which searches for all EM administrators in the marketing department located in the USA:
    $ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

    Here is another example which shows all the named credentials created since a specific date:
    $ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
    Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

    Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:
    $ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"
    You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option:
    $ emcli list -resource="PreferredCredentialsDefault" -help

    4. Secure access
    When the list verb collects data, it displays only content to which the administrator currently logged into EM CLI has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

    5. User-defined SQL
    Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to reference the documentation for the supported MGMT$ database views. Suppose you are using an MGMT$ABC view which is not in that chapter: since the view is not documented and not supported, it might undergo a change in its structure, or in the data in it, during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after an upgrade. Here's an example:
    $ emcli list -sql='select * from mgmt$target'

    6. JSON output support
    JSON (JavaScript Object Notation) enables data to be displayed in a collection of name/value pairs. There is a lot of reading material about JSON online for more information.

    As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers had requested the administrator to change the lifecycle status from Test to Production. That meant the admin had to go to the EM "All targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. It sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle. We told him there is an easier way to do this with a script, which he can reuse whenever anyone wants to change a set of targets to a different lifecycle status.

    Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of all 11.2 database targets. If you are new to scripting and Jython, I would suggest visiting the basic chapters in any Jython tutorial; understanding Jython is important to write the logic for your use case. If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic.

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

     1 from emcli import *
     2 search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
     3 if len(sys.argv) == 2:
     4     print login(username=sys.argv[0])
     5     l_prop_val_to_set = sys.argv[1]
     6     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
     7     for target in l_targets.out()['data']:
     8         t_pn = 'LifeCycle Status'
     9         print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
    10         print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
    11 else:
    12     print "\n ERROR: Property value argument is missing"
    13     print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

    You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using EM CLI.

    A line-by-line explanation for beginners:
    Line 1: Imports the emcli verbs as functions.
    Line 2: search_list is a variable to pass to the search option in the list verb. I am using an escape character for the single quotes. To pass more than one value for the same option in the list verb, define comma-separated values surrounded by square brackets, as above.
    Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise line 12 prints an error message.
    Line 4: Logging into EM. You can remove this if you have set up EM CLI with autologin. For more details about setup and autologin, please see the EM CLI book in the EM documentation.
    Line 5: l_prop_val_to_set is another variable, holding the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
    Line 6: Here the output of the list verb is stored in l_targets. In the list verb I pass the resource as TargetProperties, search as the search_list variable, and only the three columns I need - TARGET_NAME, TARGET_TYPE and PROPERTY_NAME. I don't need the other columns for my task.
    Line 7: This is a for loop. The data in l_targets is available in JSON format; using the for loop, each pair will now be available in the 'target' variable.
    Line 8: t_pn is the "LifeCycle Status" variable. If required, I could take this as an input too and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
    Line 9: This is a message informing the user that the script is setting the property value for dbxyz.
    Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once it is set for a target pair, it moves to the next one. In my example I am just showing three dbs, but the real use is when you have 20 or 50 targets.

    The script is executed as:
    $ emcli @myScript.py subin Production

    The recommendation is to first test scripts before running them on a production system. We tested on a small set of targets, optimizing the script for fewer lines of code and better messaging.

    For your quick reference, the resources available in Enterprise Manager 12.1.0.4.0 with the list verb can be shown with:
    $ emcli list -help

    Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy scripting!

    Download the Oracle Enterprise Manager 12c Mobile app


  • Multiple Audio Issues

    - by Lerp
    I am having issues with my audio on Ubuntu 12.04. I will try to give as much detail as possible, so sorry if there's too much.

    The Problem

    Audio plays from both the speakers and the headphones regardless of which connector I choose and regardless of the profile I use.
    The microphone is constantly being played through the headphones and speakers.
    The headphone audio is extremely quiet, but plays in both ears, when I select "Headphones" as the connector in Sound Settings.
    The headphone audio plays in only one ear, and is quiet (but not as quiet as above), when I select "Analogue Output" as the connector in Sound Settings.
    I can only select "Headphones" as the connector in Sound Settings if I set the profile to "Analogue Stereo Output/Duplex"; all other profiles only allow me to choose "Analogue Output" as the connector.
    Despite the headphone sound issues, the speaker sound is fine, apart from the fact that I am not able to select which output is used - they just both play.

    My headphones and microphone are plugged into the front, and my speakers are plugged into the back.

    What I have tried

    I have set everything in alsamixer to 100, apart from "Front Mic Boost" which I have set to 0.

    Command Output

    aplay -l
    **** List of PLAYBACK Hardware Devices ****
    card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
      Subdevices: 0/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

    arecord -l
    **** List of CAPTURE Hardware Devices ****
    card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
      Subdevices: 2/3
      Subdevice #0: subdevice #0
      Subdevice #1: subdevice #1
      Subdevice #2: subdevice #2

    cat /proc/asound/cards
    0 [Intel ]: HDA-Intel - HDA Intel
      HDA Intel at 0xf7ff8000 irq 70

    cat /proc/asound/modules
    0 snd_hda_intel

    cat /proc/asound/card*/codec* | grep "Codec"
    Codec: Analog Devices AD1989B

    cat /etc/modprobe.d/alsa-base.conf
    # autoloader aliases
    install sound-slot-0 /sbin/modprobe snd-card-0
    install sound-slot-1 /sbin/modprobe snd-card-1
    install sound-slot-2 /sbin/modprobe snd-card-2
    install sound-slot-3 /sbin/modprobe snd-card-3
    install sound-slot-4 /sbin/modprobe snd-card-4
    install sound-slot-5 /sbin/modprobe snd-card-5
    install sound-slot-6 /sbin/modprobe snd-card-6
    install sound-slot-7 /sbin/modprobe snd-card-7
    # Cause optional modules to be loaded above generic modules
    install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
    # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
    install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
    install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
    install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
    install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }
    # Cause optional modules to be loaded above sound card driver modules
    install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
    install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }
    # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
    install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }
    # Prevent abnormal drivers from grabbing index 0
    options bt87x index=-2
    options cx88_alsa index=-2
    options saa7134-alsa index=-2
    options snd-atiixp-modem index=-2
    options snd-intel8x0m index=-2
    options snd-via82xx-modem index=-2
    options snd-usb-audio index=-2
    options snd-usb-caiaq index=-2
    options snd-usb-ua101 index=-2
    options snd-usb-us122l index=-2
    options snd-usb-usx2y index=-2
    # Ubuntu #62691, enable MPU for snd-cmipci
    options snd-cmipci mpu_port=0x330 fm_port=0x388
    # Keep snd-pcsp from being loaded as first soundcard
    options snd-pcsp index=-2
    # Keep snd-usb-audio from being loaded as first soundcard
    options snd-usb-audio index=-2

    Hopefully I have provided enough information; I will happily provide any more that is needed. Thank you.

    Update: Reinstalling alsa-base and pulseaudio fixed the headphone issues I was having.


  • Defining scope for Record Count functoid:

    - by ArunManick
    Defining scope for the Record Count functoid

    Problem: One of the most common scenarios in BizTalk is calculating the record count of a repeating structure. BizTalk comes with an advanced functoid called the Record Count functoid, which will give the record count for a repeating structure; however, you cannot define the scope for a Record Count functoid, because it accepts exactly one parameter, which can be a repeating record or a field element.

    If somebody doesn't know what "scope" means, I will explain with a simple example. Consider a source schema having the structure Country -> State -> City. A Country will have various States, and each State will have different Cities. Now you want to calculate the number of Cities present in each State. Here the scope is defined at the parent node "State". The traditional Record Count functoid will give the total number of Cities present in the source message, not the State-level City count.

    Source Schema:

    Destination Schema:

    Solution #1: As the title indicates, we are not going to add one more parameter to the Record Count functoid. Instead, we are going to achieve the solution with the help of the Scripting functoid with an Inline XSLT script. XSLT is basically the transformation language used in the mapping. "No.OfCities" indicates the destination field name to which we are going to send the value. In count(City), "count" refers to the built-in XPath function used in XSLT, and "City" refers to the source schema record name. (A sketch of this inline XSLT appears at the end of this post.) Here you can find the list of built-in functions available in XSLT.

    The mapping will look like as follows:

    The two Record Count functoids used in this map will give the total number of States and the total number of Cities in the input message.

    Solution #2: If someone doesn't like XSLT code and wishes to achieve the solution using functoids alone, here is another solution:

    Use the Logical Existence functoid to check whether "City" exists or not.
    Connect the output of the Logical Existence functoid to the Value Mapping functoid, with the second parameter as the constant "1". Hence, if the first parameter is TRUE, it will give the output "1".
    Connect the output of the Value Mapping functoid to the Cumulative Sum functoid, with scope "1".

    This will calculate the City count at the State level. The mapping will look like as follows:

    Let us see a sample input and the map output.

    Input:

    <?xml version="1.0" encoding="utf-8"?>
    <ns0:Country xmlns:ns0="http://RecordCount.Source">
      <State>
        <StateName>Tamilnadu</StateName>
        <City>
          <CityName>Pollachi</CityName>
        </City>
        <City>
          <CityName>Coimbatore</CityName>
        </City>
        <City>
          <CityName>Chennai</CityName>
        </City>
      </State>
      <State>
        <StateName>Kerala</StateName>
        <City>
          <CityName>Palakad</CityName>
        </City>
      </State>
      <State>
        <StateName>Karnataka</StateName>
        <City>
          <CityName>Bangalore</CityName>
        </City>
        <City>
          <CityName>Mangalore</CityName>
        </City>
      </State>
    </ns0:Country>

    Output:

    <ns0:Country xmlns:ns0="http://RecordCount.Destination">
      <No.OfStates>3</No.OfStates>
      <No.OfCities>6</No.OfCities>
      <States>
        <No.OfCities>3</No.OfCities>
      </States>
      <States>
        <No.OfCities>1</No.OfCities>
      </States>
      <States>
        <No.OfCities>2</No.OfCities>
      </States>
    </ns0:Country>

    Conclusion: This is my first post and I hope you enjoyed it.

    -Arun
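    For reference, a minimal sketch of the inline XSLT the Scripting functoid would hold for Solution #1 - a sketch under the assumption that it is emitted inside the per-State loop, with City as the source record and No.OfCities as the destination field, matching the schemas above:

    <!-- Emits the per-State city count; count() is the built-in XPath function. -->
    <No.OfCities>
      <xsl:value-of select="count(City)"/>
    </No.OfCities>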


  • Speed up executable program Linux. Bit Toggling

    - by AK_47
    I have a ZyBo circuit board, which has an ARMv7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50MHz. However, my program's generated clock only has a max frequency of 115 Hz. I need this program's output to be as fast as possible, because the PMOD I'm using is capable of 50MHz.

    I compiled my program with the following command line:

    gcc -ofast (c_program)

    Here is some sample code:

    #include <stdio.h>
    #include <stdlib.h>

    #define ARRAYSIZE 511

    //________________________________________
    //macro for the SIGNAL PMOD
    //________________________________________
    //DATA
    //ZYBO Use Pin JE1
    #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction");
    #define SIGNAL_ON system("echo 1 > /sys/class/gpio/gpio54/value");
    #define SIGNAL_OFF system("echo 0 > /sys/class/gpio/gpio54/value");

    //________________________________________
    //macro for the "CLOCK" PMOD
    //________________________________________
    //CLOCK
    //ZYBO Use Pin JE4
    #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction");
    #define MYCLOCK_ON system("echo 1 > /sys/class/gpio/gpio57/value");
    #define MYCLOCK_OFF system("echo 0 > /sys/class/gpio/gpio57/value");

    int main(void){
        int myarray[ARRAYSIZE] = {//hard coded array for signal data
            1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0
        };

        INIT_SIGNAL
        INIT_MYCLOCK;

        //infinite loop
        int i;
        do{
            i = 0;
            do{
                /* 1020 is chosen because it is twice the size needed, allowing for
                   the changes in the clock. (511 = 0-510, 510*2 = 1020 ==> 0-1020
                   needed, so 1021 it is) */
                if((i%2)==0) {
                    MYCLOCK_ON;
                    if(myarray[i/2] == 1){
                        SIGNAL_ON;
                    }else{
                        SIGNAL_OFF;
                    }
                }
                else if((i%2)==1) {
                    MYCLOCK_OFF;
                    //don't need to change the signal since it will just stay at whatever it was.
                }
                ++i;
            } while(i < 1021);
        } while(1);

        return 0;
    }

    I'm using the 'system' call to tell the system to output 1 volt or 0 volts on a pin on the board, to represent the data signal and clock signal (one pin for the data and another for the clock). That was the only way I knew to tell the system to output a voltage. What can I do to make my program's output be at least in the magnitude of megahertz?
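    For comparison, here is a hedged sketch of the same toggling done with direct writes to the already-exported sysfs value files, which avoids spawning a shell per transition (each system() call forks and execs /bin/sh, which by itself caps the toggle rate far below the hardware limit). GPIO numbers 54 and 57 are taken from the question; export and direction setup are assumed to have been done already. Sysfs writes still go through the kernel, so reaching tens of MHz would realistically require memory-mapped GPIO registers instead.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* open once, reuse the descriptors for every toggle */
        int sig = open("/sys/class/gpio/gpio54/value", O_WRONLY);
        int clk = open("/sys/class/gpio/gpio57/value", O_WRONLY);
        if (sig < 0 || clk < 0)
            return 1;
        for (;;) {
            write(clk, "1", 1);  /* clock high */
            write(sig, "1", 1);  /* data bit (example: constant 1) */
            write(clk, "0", 1);  /* clock low */
        }
    }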


  • Need help with copy constructor for very basic implementation of singly linked lists

    - by Jesus
    Last week, we created a program that manages sets of strings, using classes and vectors. I was able to complete this 100%. This week, we have to replace the vector we used to store strings in our class with simple singly linked lists. The program basically allows users to declare sets of strings that are empty, and sets with only one element. In the main file, there is a vector whose elements are a struct that contains the set's name and a strSet (the class).

    HERE IS MY PROBLEM: It deals with the copy constructor of the class. When I remove/comment out the copy constructor, I can declare as many empty or single sets as I want, and output their values without a problem. But I know I will obviously need the copy constructor when I implement the rest of the program. When I leave the copy constructor in, I can declare one set, either single or empty, and output its value. But if I declare a second set, and I try to output either of the first two sets, I get a segmentation fault. Moreover, if I try to declare more than two sets, I get a segmentation fault. Any help would be appreciated!

    Here is my code for a very basic implementation of everything.

    The main file, setcalc.cpp:

    #include <iostream>
    #include <cctype>
    #include <cstring>
    #include <string>
    #include "help.h"
    #include "strset2.h"

    using namespace std;

    // Declares a structure to hold all the sets defined
    struct setsOfStr {
        string nameOfSet;
        strSet stringSet;
    };

    // Checks if the set name inputted is unique
    bool isSetNameUnique(vector<setsOfStr> strSetArr, string setName) {
        for (unsigned int i = 0; i < strSetArr.size(); i++) {
            if (strSetArr[i].nameOfSet == setName) {
                return false;
            }
        }
        return true;
    }

    int main(int argc, char *argv[]) {
        char commandChoice;
        // Declares a vector with our declared structure as the type
        vector<setsOfStr> strSetVec;
        string setName;
        string singleEle;

        // Sets a loop that will constantly ask for a command until 'q' is typed
        while (1) {
            cin >> commandChoice; // read the next command
            // declaring a set to be empty
            if (commandChoice == 'd') {
                cin >> setName;
                // Check that the set name inputted is unique
                if (isSetNameUnique(strSetVec, setName) == true) {
                    strSet emptyStrSet;
                    setsOfStr set1;
                    set1.nameOfSet = setName;
                    set1.stringSet = emptyStrSet;
                    strSetVec.push_back(set1);
                }
                else {
                    cerr << "ERROR: Re-declaration of set '" << setName << "'\n";
                }
            }
            // declaring a set to be a singleton
            else if (commandChoice == 's') {
                cin >> setName;
                cin >> singleEle;
                // Check that the set name inputted is unique
                if (isSetNameUnique(strSetVec, setName) == true) {
                    strSet singleStrSet(singleEle);
                    setsOfStr set2;
                    set2.nameOfSet = setName;
                    set2.stringSet = singleStrSet;
                    strSetVec.push_back(set2);
                }
                else {
                    cerr << "ERROR: Re-declaration of set '" << setName << "'\n";
                }
            }
            // using the output function
            else if (commandChoice == 'o') {
                cin >> setName;
                if (isSetNameUnique(strSetVec, setName) == false) {
                    // loop through until the set name is matched and call output on its strSet
                    for (unsigned int k = 0; k < strSetVec.size(); k++) {
                        if (strSetVec[k].nameOfSet == setName) {
                            (strSetVec[k].stringSet).output();
                        }
                    }
                }
                else {
                    cerr << "ERROR: No such set '" << setName << "'\n";
                }
            }
            // quitting
            else if (commandChoice == 'q') {
                break;
            }
            else {
                cerr << "ERROR: Ignoring bad command: '" << commandChoice << "'\n";
            }
        }
        return 0;
    }

    Here is strset2.h:

    #ifndef _STRSET_
    #define _STRSET_

    #include <iostream>
    #include <vector>
    #include <string>

    struct node {
        std::string s1;
        node * next;
    };

    class strSet {
    private:
        node * first;

    public:
        strSet();                    // Create empty set
        strSet(std::string s);       // Create singleton set
        strSet(const strSet &copy);  // Copy constructor
        // will implement destructor later
        void output() const;
        strSet& operator = (const strSet& rtSide); // Assignment
    }; // End of strSet class

    #endif // _STRSET_

    And here is strset2.cpp (the implementation of the class):

    #include <iostream>
    #include <vector>
    #include <string>
    #include "strset2.h"

    using namespace std;

    strSet::strSet() {
        first = NULL;
    }

    strSet::strSet(string s) {
        node *temp;
        temp = new node;
        temp->s1 = s;
        temp->next = NULL;
        first = temp;
    }

    strSet::strSet(const strSet& copy) {
        cout << "copy-cst\n";
        node *n = copy.first;
        node *prev = NULL;
        while (n) {
            node *newNode = new node;
            newNode->s1 = n->s1;
            newNode->next = NULL;
            if (prev) {
                prev->next = newNode;
            }
            else {
                first = newNode;
            }
            prev = newNode;
            n = n->next;
        }
    }

    void strSet::output() const {
        if (first == NULL) {
            cout << "Empty set\n";
        }
        else {
            node *temp;
            temp = first;
            while (1) {
                cout << temp->s1 << endl;
                if (temp->next == NULL) break;
                temp = temp->next;
            }
        }
    }

    strSet& strSet::operator = (const strSet& rtSide) {
        first = rtSide.first;
        return *this;
    }
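    An aside on where such a crash can come from - a hedged sketch, not a verified fix for this exact program: the copy constructor above never sets first when the source set is empty (the while loop body is skipped, leaving a garbage pointer), and the assignment operator copies the first pointer itself rather than the nodes, so two sets end up sharing one list. A deep-copying assignment consistent with the copy constructor might look like:

    // Initialize first unconditionally in the copy constructor:
    strSet::strSet(const strSet& copy) : first(NULL) {
        // ... same node-copying loop as above ...
    }

    // Deep-copy assignment via the copy constructor (copy-and-swap style);
    // a real destructor should eventually free the old nodes.
    strSet& strSet::operator = (const strSet& rtSide) {
        if (this != &rtSide) {      // guard against self-assignment
            strSet tmp(rtSide);     // deep copy of the right-hand side
            node* old = first;
            first = tmp.first;
            tmp.first = old;        // tmp now holds the old list (leaked until a destructor exists)
        }
        return *this;
    }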


  • "Exception: msg 'axis2:null', not-found" when using a suds client with an axis2 server

    - by konrad
    I am writing a Suds (Python) SOAP client for an Axis2 server I have no control over. Suds chokes on the WSDL file with the following exception:

    File "site-packages/suds/wsdl.py", line 494, in resolve
        raise Exception("msg '%s', not-found" % op.input)
    Exception: msg 'axis2:null', not-found

    This is the WSDL file (I have replaced the hostnames with localhost). Any clue on how to fix this with the ImportDoctor?

    <?xml version="1.0" encoding="UTF-8"?>
    <wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
        xmlns:axis2="http://ws.apache.org/axis2"
        xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl"
        xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
        xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
        xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
        targetNamespace="http://ws.apache.org/axis2">
      <wsdl:types/>
      <wsdl:portType name="__SynapseServicePortType">
        <wsdl:operation name="mediate">
          <wsdl:input message="axis2:null" wsaw:Action="urn:mediate"/>
          <wsdl:output message="axis2:null" wsaw:Action="urn:mediateResponse"/>
        </wsdl:operation>
      </wsdl:portType>
      <wsdl:binding name="__SynapseServiceSoap11Binding" type="axis2:__SynapseServicePortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
        <wsdl:operation name="mediate">
          <soap:operation soapAction="urn:mediate" style="document"/>
          <wsdl:input>
            <soap:body use="literal"/>
          </wsdl:input>
          <wsdl:output>
            <soap:body use="literal"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <wsdl:binding name="__SynapseServiceSoap12Binding" type="axis2:__SynapseServicePortType">
        <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
        <wsdl:operation name="mediate">
          <soap12:operation soapAction="urn:mediate" style="document"/>
          <wsdl:input>
            <soap12:body use="literal"/>
          </wsdl:input>
          <wsdl:output>
            <soap12:body use="literal"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <wsdl:binding name="__SynapseServiceHttpBinding" type="axis2:__SynapseServicePortType">
        <http:binding verb="POST"/>
        <wsdl:operation name="mediate">
          <http:operation location="mediate"/>
          <wsdl:input>
            <mime:content type="text/xml" part="mediate"/>
          </wsdl:input>
          <wsdl:output>
            <mime:content type="text/xml" part="mediate"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <wsdl:service name="__SynapseService">
        <wsdl:port name="__SynapseServiceHttpsSoap11Endpoint" binding="axis2:__SynapseServiceSoap11Binding">
          <soap:address location="https://localhost:8843/services/__SynapseService.__SynapseServiceHttpsSoap11Endpoint"/>
        </wsdl:port>
        <wsdl:port name="__SynapseServiceHttpSoap11Endpoint" binding="axis2:__SynapseServiceSoap11Binding">
          <soap:address location="http://localhost:8880/services/__SynapseService.__SynapseServiceHttpSoap11Endpoint"/>
        </wsdl:port>
        <wsdl:port name="__SynapseServiceHttpsSoap12Endpoint" binding="axis2:__SynapseServiceSoap12Binding">
          <soap12:address location="https://localhost:8843/services/__SynapseService.__SynapseServiceHttpsSoap12Endpoint"/>
        </wsdl:port>
        <wsdl:port name="__SynapseServiceHttpSoap12Endpoint" binding="axis2:__SynapseServiceSoap12Binding">
          <soap12:address location="http://localhost:8880/services/__SynapseService.__SynapseServiceHttpSoap12Endpoint"/>
        </wsdl:port>
        <wsdl:port name="__SynapseServiceHttpsEndpoint" binding="axis2:__SynapseServiceHttpBinding">
          <http:address location="https://localhost:8843/services/__SynapseService.__SynapseServiceHttpsEndpoint"/>
        </wsdl:port>
        <wsdl:port name="__SynapseServiceHttpEndpoint" binding="axis2:__SynapseServiceHttpBinding">
          <http:address location="http://localhost:8880/services/__SynapseService.__SynapseServiceHttpEndpoint"/>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>


  • Techniques for modeling a dynamic dataflow with Java concurrency API

    - by Maian
    Is there an elegant way to model a dynamic dataflow in Java? By dataflow, I mean there are various types of tasks, and these tasks can be "connected" arbitrarily, such that when a task finishes, successor tasks are executed in parallel using the finished task's output as input, or when multiple tasks finish, their output is aggregated in a successor task (see flow-based programming). By dynamic, I mean that the type and number of successor tasks when a task finishes depend on the output of that finished task, so for example, task A may spawn task B if it has a certain output, but may spawn task C if it has a different output. Another way of putting it is that each task (or set of tasks) is responsible for determining what the next tasks are.

    Sample dataflow for rendering a webpage. I have as task types: file downloader, HTML/CSS renderer, HTML parser/DOM builder, image renderer, JavaScript parser, JavaScript interpreter.

    File downloader task for the HTML file
    HTML parser/DOM builder task
    File downloader task for each embedded file/link
      If an image: image renderer
      If external JavaScript: JavaScript parser, then JavaScript interpreter
      Otherwise: just store in some var/field in the HTML parser task
    JavaScript parser for each embedded script, then JavaScript interpreter
    Wait for the above tasks to finish, then HTML/CSS renderer

    (Obviously not optimal or perfectly correct, but this is simple.)

    I'm not saying the solution needs to be some comprehensive framework (in fact, the closer to the JDK API, the better), and I absolutely don't want something as heavyweight as, say, Spring Web Flow or some declarative markup or other DSL. To be more specific, I'm trying to think of a good way to model this in Java with Callables, Executors, ExecutorCompletionServices, and perhaps various synchronizer classes (like Semaphore or CountDownLatch).

    There are a couple of use cases and requirements:

    Don't make any assumptions about which executor(s) the tasks will run on. In fact, to simplify, just assume there's only one executor. It can be a fixed thread pool executor, so a naive implementation can result in deadlocks (e.g. imagine a task that submits another task and then blocks until that subtask is finished, and now imagine several of these tasks using up all the threads).

    To simplify, assume that the data is not streamed between tasks (task output -> succeeding task's input) - the finishing task and succeeding task won't exist together, so the input data to the succeeding task will not be changed by the preceding task (since it's already done).

    There are only a couple of operations that the dataflow "engine" should be able to handle:
    A mechanism where a task can queue more tasks
    A mechanism whereby a successor task is not queued until all the required input tasks are finished
    A mechanism whereby the main thread (or other threads not managed by the executor) blocks until the flow is finished
    A mechanism whereby the main thread (or other threads not managed by the executor) blocks until certain tasks have finished

    Since the dataflow is dynamic (it depends on the input/state of each task), the activation of these mechanisms should occur within the task code, e.g. the code in a Callable is itself responsible for queueing more Callables. The dataflow "internals" should not be exposed to the tasks (Callables) themselves - only the operations listed above should be available to the task.

    Note that the type of the data is not necessarily the same for all tasks, e.g. a file download task may accept a File as input but will output a String.

    If a task throws an uncaught exception (indicating some fatal error requiring all dataflow processing to stop), it must propagate up to the thread that initiated the dataflow as quickly as possible and cancel all tasks (or something fancier, like a fatal error handler). Tasks should be launched as soon as possible. This, along with the previous requirement, should preclude simple Future polling + Thread.sleep().

    As a bonus, I would like the dataflow engine itself to perform some action (like logging) every time a task finishes, or when no task has finished in X time since the last task finished. Something like:

    ExecutorCompletionService<T> ecs;
    while (hasTasks()) {
        Future<T> future = ecs.poll(1, TimeUnit.MINUTES);
        some_action_like_logging();
        if (future != null) {
            future.get();
            ...
        }
        ...
    }

    Are there straightforward ways to do all this with the Java concurrency API? Or, if it's going to be complex no matter what is available in the JDK, is there a lightweight library that satisfies the requirements? I already have a partial solution that fits my particular use case (it cheats in a way, since I'm using two executors, and just so you know, it's not related at all to the web browser example I gave above), but I'd like to see a more general-purpose and elegant solution.
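    For the "task queues more tasks / main thread blocks until the flow finishes / exceptions propagate" trio, a minimal engine sketch along the lines the question suggests (all names hypothetical) could be:

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal sketch of a dynamic-dataflow engine: tasks receive the engine
    // (not its internals) and may queue successors through submit().
    class DataflowEngine {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);
        private final CompletionService<Void> done = new ExecutorCompletionService<>(pool);
        private final AtomicInteger pending = new AtomicInteger();

        public void submit(Callable<Void> task) {
            pending.incrementAndGet(); // count before submit so the flow can't look finished early
            done.submit(task);
        }

        // Blocks the initiating thread until every task (and its successors) has
        // finished; rethrows the first uncaught task exception and cancels the rest.
        public void awaitCompletion() throws Exception {
            try {
                while (pending.get() > 0) {
                    Future<Void> f = done.take(); // a task finished (or failed)
                    pending.decrementAndGet();
                    f.get();                      // propagates the task's exception
                }
            } catch (ExecutionException e) {
                pool.shutdownNow();               // fatal error: cancel everything
                throw e;
            }
            pool.shutdown();
        }
    }

    A task can then decide its successors from its own output, e.g. engine.submit(() -> { if (looksLikeImage(data)) engine.submit(renderImageTask(data)); return null; }). Note this sketch does not solve the thread-starvation deadlock mentioned above - a task must queue successors and return rather than block on them - and fan-in (waiting for several inputs) would still need something like a per-successor countdown.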


  • With XSLT, how can I use this if-test with an array, when search element is returned by a template call inside the for loop?

    - by codesforcoffee
    I think this simple example might ask the question a lot more clearly. I have an input file with multiple products. There are 10 types of product (two product IDs are enough for this example), but the input will have 200 products, and I only want to output the info for the first product of each type. (Output the info for the lowest-priced one; the first one will be the lowest priced because I sort by price first.)

    So I want to read in each product, but only output the product's info if I haven't already output a product with that same ID. I couldn't figure out how to get the processID template to return a value that I can use for my if-test - one that uses parameters from inside the for-each Product loop - and then properly close the if tag in the right place so it won't output the opening Product tag unless it passes the if-test.

    I know the following code does not work, but it illustrates the idea and gives me a place to start:

    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" encoding="UTF-8" indent="yes" cdata-section-elements="prod_name adv_notes"/>
      <xsl:template match="/">
        <List>
          <xsl:for-each select="ProductGroup">
            <xsl:sort select="ActiveProducts/Product/Rate"/>
            <xsl:variable name="IDarray">
              <xsl:for-each select="ActiveProducts/Product">
                <xsl:variable name="CurrentID">
                  <xsl:call-template name="processID">
                    <xsl:with-param name="ProductCode" select="ProductCode"/>
                  </xsl:call-template>
                </xsl:variable>
                <xsl:if test="not(contains($IDarray, $CurrentID))">
                  <child elem="{@elem}">
                    <xsl:select value-of="$CurrentID"/>
                  </child>
                  <Product>
                    <xsl:attribute name="ID">
                      <xsl:select value-of="$CurrentID"/>
                    </xsl:attribute>
                    <prod_name>
                      <xsl:value-of select="../ProductName"/>
                    </prod_name>
                    <rate>
                      <xsl:value-of select="../Rate"/>
                    </rate>
                  </Product>
                </xsl:if>
              </xsl:for-each>
            </xsl:variable>
          </xsl:for-each>
        </List>
      </xsl:template>

      <xsl:template name="processID">
        <xsl:param name="ProductCode"/>
        <xsl:choose>
          <xsl:when test="starts-with($ProductCode, '515')">5</xsl:when>
          <xsl:when test="starts-with($ProductCode, '205')">2</xsl:when>
        </xsl:choose>
      </xsl:template>

    Thanks so much in advance; I know some of the awesome programmers here can help! :)
    -Holly

    An input would look like this:

    <ProductGroup>
      <ActiveProducts>
        <Product>
          <ProductCode> 5155 </ProductCode>
          <ProductName> House </ProductName>
          <Rate> 3.99 </Rate>
        </Product>
        <Product>
          <ProductCode> 5158 </ProductCode>
          <ProductName> House </ProductName>
          <Rate> 4.99 </Rate>
        </Product>
      </ActiveProducts>
    </ProductGroup>
    <ProductGroup>
      <ActiveProducts>
        <Product>
          <ProductCode> 2058 </ProductCode>
          <ProductName> House </ProductName>
          <Rate> 2.99 </Rate>
        </Product>
        <Product>
          <ProductCode> 2055 </ProductCode>
          <ProductName> House </ProductName>
          <Rate> 7.99 </Rate>
        </Product>
      </ActiveProducts>
    </ProductGroup>

    200 of those with different attributes. I have the translation working; I just needed to add that array and if statement somehow. Output would be this for only that simple input file:
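    For anyone tackling the same problem in XSLT 1.0, the usual approach is Muenchian grouping with a key rather than an accumulating "array" variable. A hedged sketch against the sample input above - it groups on the first three digits of ProductCode, mirroring the processID prefixes, and picks the lowest Rate per group; element names are taken from the sample, the key name is made up:

    <!-- Group products by their ProductCode prefix. -->
    <xsl:key name="products-by-prefix" match="Product"
             use="substring(normalize-space(ProductCode), 1, 3)"/>

    <xsl:template match="/">
      <List>
        <!-- Visit one representative Product per prefix group... -->
        <xsl:for-each select="//Product[generate-id() =
            generate-id(key('products-by-prefix',
                            substring(normalize-space(ProductCode), 1, 3))[1])]">
          <!-- ...then emit only the cheapest member of that group. -->
          <xsl:for-each select="key('products-by-prefix',
                                    substring(normalize-space(ProductCode), 1, 3))">
            <xsl:sort select="Rate" data-type="number"/>
            <xsl:if test="position() = 1">
              <Product ID="{substring(normalize-space(ProductCode), 1, 3)}">
                <prod_name><xsl:value-of select="normalize-space(ProductName)"/></prod_name>
                <rate><xsl:value-of select="normalize-space(Rate)"/></rate>
              </Product>
            </xsl:if>
          </xsl:for-each>
        </xsl:for-each>
      </List>
    </xsl:template>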


  • Error when compiling FXAA shader

    - by mulletdevil
    I am getting the following error when compiling the FXAA shader downloaded from here: http://timothylottes.blogspot.co.uk/2011/07/fxaa-311-released.html

    Fxaa3_11.h(934,5): error x4000: Use of potentially uninitialized variable (FxaaPixelShader)

    Here is the line in the shader:

    if(earlyExit)
        #if (FXAA_DISCARD == 1)
            FxaaDiscard;
        #else
            return rgbyM;
        #endif

    Does anyone know what may be causing this? I have not changed any values in that shader. Here is a snippet from my current pixel shader:

    #define FXAA_GREEN_AS_LUMA 1
    #define FXAA_HLSL_4 1
    #define FXAA_PC 1
    #define FXAA_QUALITY__PRESET 12
    #include "Fxaa3_11.h"

    PS_OUTPUT main(PS_INPUT fragment)
    {
        PS_OUTPUT output;

        const float2 pos = fragment.hPosition.xy;
        const float4 notUsedFloat4 = float4(0.0f, 0.0f, 0.0f, 0.0f);
        const float fxaaQualitySubpix = 0;
        const float fxaaQualityEdgeThreshold = 0.333;
        const float fxaaQualityEdgeThresholdMin = 0.0833;
        const float notUsedFloat = 0.0f;

        FxaaTex fxaaTex;
        fxaaTex.smpl = SampleType;
        fxaaTex.tex = inputTexture;

        output.colour = FxaaPixelShader(pos,                         //1
                                        notUsedFloat4,               //2
                                        fxaaTex,                     //3
                                        fxaaTex,                     //4
                                        fxaaTex,                     //5
                                        rcpFrame,                    //6
                                        notUsedFloat4,               //7
                                        rcpFrameOpt,                 //8
                                        notUsedFloat4,               //9
                                        fxaaQualitySubpix,           //10
                                        fxaaQualityEdgeThreshold,    //11
                                        fxaaQualityEdgeThresholdMin, //12
                                        notUsedFloat,                //13
                                        notUsedFloat,                //14
                                        notUsedFloat,                //15
                                        notUsedFloat                 //16
                                        );
        return output;
    }

    Am I passing a wrong value into the shader?


  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There’s a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of an hook-points to allow customization of your application. Watching Brad Wilsons’ Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: “Represents a class that is responsible for invoking the action methods of a controller.” Well if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different ‘versions’ of InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that’s enough on the ‘behind-the-screens’ of this class. Let’s see how we can use this class to hook-up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type-in the output format in the url and should the get result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on where RenderAs is my controller. There are many ways of doing this and I’m using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is ‘RenderAsController’ and the action that’ll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my ‘RenderAsController’ class. I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and the most obvious one being that the return type is not of type ActionResult (or one of its derivatives). Instead I’m passing the type of the model itself (IList<Item> in this case). 
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
    When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45–50). This is the default behavior in most MVC applications, where a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. For an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look (screenshots in the original post). This is how we can leverage this feature of ASP.NET MVC to develop a better application. I’ve used the iTextSharp library to convert to the pdf format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You’ll get an option to download once you open the link.) Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
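    One loose end from the CreateActionResult listing: the ToCsv and ToPdf helpers (lines 34 and 41) aren’t shown in this excerpt. As a rough sketch of what ToCsv could look like for the Item model above (my own illustration, not the original author’s code; it assumes field values contain no commas, so a real version would need quoting/escaping):

        // Hypothetical helper: flattens the Item list into CSV text.
        private static string ToCsv(IList<Item> items)
        {
            var sb = new System.Text.StringBuilder();
            sb.AppendLine("Id,Name,Cuisine");
            foreach (Item item in items)
            {
                // One row per item; the Cuisine object is flattened to its Name.
                sb.AppendLine(string.Format("{0},{1},{2}", item.Id, item.Name, item.Cuisine.Name));
            }
            return sb.ToString();
        }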

    Read the article

  • Connecting to MSP430 via /dev/ttyACM0

    - by speciousfool
    I'd like some suggestions about how to fix garbled serial output from a device connected on /dev/ttyACM0. Lately I've been working on a development project making use of TI's MSP430 microcontroller (specifically the eZ430-RF2560). Over on this thread you can see we've been testing some code and have found that the output of the microcontroller over serial is garbled. The btstack provides a simple counter test program. When we run the program and look at the serial port output using PuTTY on Windows 7, we see:

        rfcomm_send_internal cid 117 doesn't exist! BTstack counter 26230
        rfcomm_send_internal cid 117 doesn't exist! BTstack counter 26231

    However, if we connect from various Ubuntu clients we get something like:

        Stt.R. BTacn 0 BTacn 002BTacn 0 BTcct 04BTtacoe 5BTacun

    My current belief is that this is because the device is being detected by cdc_acm as a generic USB ACM device. Another thread about a similar microcontroller suggests that the device should use a specific USB serial driver. We've verified that the module is compiled on our system and did a "modprobe ti_usb_3410_5052", but this had no effect on cdc_acm. Here is the relevant section of the kernel's debug log:

        [ 2735.092987] usb 2-1.2: new full speed USB device number 5 using ehci_hcd
        [ 2735.213655] cdc_acm 2-1.2:1.0: This device cannot do calls on its own. It is not a modem.
        [ 2735.213669] cdc_acm 2-1.2:1.0: No union descriptor, testing for castrated device
        [ 2735.213720] cdc_acm 2-1.2:1.0: ttyACM0: USB ACM device
        [ 2745.241996] generic-usb 0003:0451:F432.0003: usb_submit_urb(ctrl) failed
        [ 2745.242023] generic-usb 0003:0451:F432.0003: timeout initializing reports
        [ 2745.242401] generic-usb 0003:0451:F432.0003: hiddev0,hidraw0: USB HID v1.01 Device [Texas Instruments Texas Instruments MSP-FET430UIF] on usb-0000:00:1d.0-1.2/input1

    So, in summary, we'd like to figure out how to properly connect to this device. It would also be useful to know the appropriate place to file a bug report.
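    Two quick checks worth running here (a suggestion, not part of the original question): confirm which kernel driver actually claimed the interface, and rule out a line-settings mismatch, since a wrong baud rate or framing is a classic cause of garbled-but-patterned serial output. A minimal sketch, assuming the device really is on /dev/ttyACM0 (sysfs paths vary slightly by kernel):

        # List the interfaces currently bound to cdc_acm (and to the TI driver, if any)
        ls /sys/bus/usb/drivers/cdc_acm/
        ls /sys/bus/usb/drivers/ti_usb_3410_5052/ 2>/dev/null

        # Set the line discipline explicitly (e.g. 115200 8N1) before reading
        stty -F /dev/ttyACM0 115200 cs8 -cstopb -parenb
        cat /dev/ttyACM0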

    Read the article

  • Install RT Failed: DateTime >= 0.44 ...MISSING

    - by javano
    I am trying to install RT-4.0.5 (Request Tracker) but I keep getting the following output:

        $ make fixdeps
        <output cut>
        SOME DEPENDENCIES WERE MISSING.
        CORE missing dependencies:
                DateTime >= 0.44 ...MISSING
        make: *** [fixdeps] Error 1

    The full output is here (it's quite long): http://pastebin.com/raw.php?i=Tn7GrkYw

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 8.04.4 LTS
        Release:        8.04
        Codename:       hardy

        $ perl --version
        This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux

        $ cpan --version
        /usr/local/bin/cpan version 1.57 calling Getopt::Std::getopts (version 1.06 [paranoid]),
        running under Perl version 5.14.2.
        [Now continuing due to backward compatibility and excessive paranoia.
         See ``perldoc Getopt::Std'' about $Getopt::Std::STANDARD_HELP_VERSION.]
        Nothing to install!

    I can't see why this is a problem:

        $ cpan DateTime
        Going to read '/root/.cpan/Metadata'
        Database was generated on Thu, 08 Mar 2012 16:11:26 GMT
        DateTime is up to date (0.72).
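    One check worth making here (a suggestion, not part of the original question): with a hand-built perl in /usr/local, RT's dependency check may be running under a different perl binary than the one cpan installed DateTime into. A quick way to compare the two; the RT source path below is hypothetical:

        # Which perl is first in PATH, and which perl does RT's dependency checker use?
        which perl
        head -1 /path/to/rt-4.0.5/sbin/rt-test-dependencies   # hypothetical path to the RT source tree

        # Can that perl actually load DateTime, and which version does it see?
        perl -MDateTime -e 'print $DateTime::VERSION, "\n"'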

    Read the article

  • Can static methods be called using object/instance in .NET

    The answer is yes and no: yes in C++, Java and VB.NET; no in C#. This is only a compiler restriction in C#. You might see on some websites that we can break this restriction using reflection and delegates, but we can't, according to my little research :). I shall try to explain. The following code sample tries to break this rule using reflection; it seems that it is possible to call a static method using an object, p1:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    var p1 = new Person() { Name = "Smith" };
                    typeof(Person).GetMethod("TestStatMethod").Invoke(p1, new object[] { });
                }

                class Person
                {
                    public string Name { get; set; }
                    public static void TestStatMethod()
                    {
                        Console.WriteLine("Hello");
                    }
                }
            }
        }

    But I do not think this method is being called using p1; rather it is called using the type name "Person". I shall try to prove this; look at another example. Test2 inherits from Test1. Let's see various scenarios.

    Scenario 1:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    Test1 t = new Test1();
                    typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
                }
            }

            class Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method1");
                }
            }

            class Test2 : Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method2");
                }
            }
        }

    Output: At test1::Method2

    Scenario 2:

        static void Main()
        {
            Test2 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At test1::Method2

    Scenario 3:

        static void Main()
        {
            Test1 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At test1::Method2

    In all the above scenarios the output is the same. That means reflection does not consider the object you pass to Invoke in the case of static methods; it always considers the type you specify in typeof(). So what is the use of passing an instance to Invoke? Look at the sample below:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    typeof(Test2).GetMethod("Method1").Invoke(null, new object[] { });
                }
            }

            class Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method1");
                }
            }

            class Test2 : Test1
            {
                public static void Method1()
                {
                    Console.WriteLine("At test1::Method2");
                }
            }
        }

    Output: At test1::Method2

    I was able to invoke Method1 of Test2 without any object; no wonder, as Method1 is static. So we may conclude that static methods cannot be called using instances (only in C#). Why has Microsoft restricted it in C#? There is really no use in calling static methods through objects, because static methods are stateless; but the latest Java and C++ compilers still allow calling static methods using instances.
    Java sample:

        class Test
        {
            public static void main(String str[])
            {
                Person p = new Person();
                System.out.println(p.GetCount());
            }
        }

        class Person
        {
            public static int GetCount()
            {
                return 100;
            }
        }

    Output: 100
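    For contrast (my own illustration, not from the original post), the same instance-qualified call in C# is rejected at compile time, which is exactly the restriction being discussed:

        Person p = new Person();
        // p.TestStatMethod();       // error CS0176: Member 'Person.TestStatMethod()'
        //                           // cannot be accessed with an instance reference;
        //                           // qualify it with a type name instead
        Person.TestStatMethod();     // the only form the C# compiler accepts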

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well. The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y, etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:

        1. Decorate every action you want as a menu item with a custom attribute
        2. Reflect out all menu items into a structure at load time
        3. Render the menu as CSS-friendly <ul><li> HTML

    The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
        public class MvcMenuItemAttribute : Attribute
        {
            public string MenuText { get; set; }
            public int Order { get; set; }
            public string ParentLink { get; set; }
            internal string Controller { get; set; }
            internal string Action { get; set; }

            #region ctor
            public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }
            public MvcMenuItemAttribute(string menuText, int order) { MenuText = menuText; Order = order; }

            internal string Link { get { return string.Format("/{0}/{1}", Controller, this.Action); } }

            internal MvcMenuItemAttribute ParentItem { get; set; }
            #endregion
        }

    The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks. The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.Net sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the two links to be "current" – it allows one to include a link twice (home->deals and products->deals etc), and the logic of deciding "current" is easy enough to explain to the customer.
    Now that we got that out of the way, let's build the menu data structure:

        public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
        {
            var result = new List<MvcMenuItemAttribute>();
            foreach (var type in assembly.GetTypes())
            {
                if (!type.IsSubclassOf(typeof(Controller)))
                {
                    continue;
                }
                foreach (var method in type.GetMethods())
                {
                    var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                    if (items == null)
                    {
                        continue;
                    }
                    foreach (var item in items)
                    {
                        if (String.IsNullOrEmpty(item.Controller))
                        {
                            item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                        }
                        if (String.IsNullOrEmpty(item.Action))
                        {
                            item.Action = method.Name;
                        }
                        result.Add(item);
                    }
                }
            }
            return result.OrderBy(i => i.Order).ToList();
        }

    Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

        public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
        {
            _MenuItems = items;
            _MenuItems.ForEach(i => i.ParentItem = items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
        }

    The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

        public static void RegisterMenuItems(Type mvcApplicationType)
        {
            RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
        }

    To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication and that is why the Register call passes in that type.

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);

            MvcMenu.RegisterMenuItems(typeof(MvcApplication));
        }

    What else is left to do? Oh, right, render!
        public static void ShowMenu(this TextWriter output)
        {
            var writer = new HtmlTextWriter(output);
            renderHierarchy(writer, _MenuItems, null);
        }

        public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
        {
            var writer = new HtmlTextWriter(output);
            string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

            var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
            if (menuItem != null)
            {
                renderBreadCrumb(writer, _MenuItems, menuItem);
            }
        }

        private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
        {
            if (current == null)
            {
                return;
            }
            var parent = current.ParentItem;
            renderBreadCrumb(writer, menuItems, parent);
            writer.Write(current.MenuText);
            writer.Write(" / ");
        }

        static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
        {
            if (!hierarchy.Any(i => i.ParentItem == root))
                return;

            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
            {
                if (ItemFilter == null || ItemFilter(current))
                {
                    writer.RenderBeginTag(HtmlTextWriterTag.Li);
                    writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                    writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                    writer.RenderBeginTag(HtmlTextWriterTag.A);
                    writer.WriteEncodedText(current.MenuText);
                    writer.RenderEndTag(); // link
                    renderHierarchy(writer, hierarchy, current);
                    writer.RenderEndTag(); // li
                }
            }
            writer.RenderEndTag(); // ul
        }

    The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, writing out using the actual writer on the actual stream, rather than generating string and byte intermediaries (yes, StringBuilder being no exception), disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature. To carry out rendering of a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day… recursion is fun though.

    Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

        <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>

    to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon), and

        <% MvcMenu.ShowMenu(Writer); %>

    to show the menu. As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs. This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.
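    As a small illustration of that filter hook (my own sketch, not from the original post; it assumes ItemFilter is exposed as a public static Func<MvcMenuItemAttribute, bool> on the MvcMenu class), hiding admin-only items from ordinary users might look like:

        // Hypothetical wiring, e.g. in Application_Start():
        // only render links under /Admin for users in the Admin role.
        MvcMenu.ItemFilter = item =>
            !item.Link.StartsWith("/Admin", StringComparison.OrdinalIgnoreCase)
            || HttpContext.Current.User.IsInRole("Admin");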

    Read the article

  • How do I enable JPEG Support for PHP?

    - by ngache
    My Configure Command doesn't say anything about jpg, nor gif/png, but I can see gif/png support in the output of phpinfo(). I built PHP with --with-gd, but only GIF Support and PNG Support are in the output of phpinfo(). How do I enable JPEG Support?

    UPDATE: I got this problem when compiling:

        Sorry, I cannot run apxs. Possible reasons follow:
        1. Perl is not installed
        2. apxs was not found. Try to pass the path using --with-apxs2=/path/to/apxs
        3. Apache was not built using --enable-so (the apxs usage page is displayed)
        The output of /usr/local/apache2/bin/apxs follows:
        cannot open /usr/local/apache2/build/config_vars.mk: No such file or directory at /usr/local/apache2/bin/apxs line 218.

    What should I do now?
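    For what it's worth, GD only gains JPEG support when PHP's configure is pointed at libjpeg. A hedged sketch of the relevant flags for a PHP 5-era source build (install paths and package names vary by system, so treat these as illustrative):

        # libjpeg development headers must be installed before configuring
        ./configure --with-gd \
            --with-jpeg-dir=/usr \
            --with-png-dir=/usr \
            --with-apxs2=/usr/local/apache2/bin/apxs
        make && make install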

    Read the article

  • Variable parsing with Bash and wget

    - by Bill Westrup
    I'm attempting to use wget in a simple bash script to grab a jpeg image from an Axis camera. This script outputs a file named JPEGOUT, instead of the desired output, which should be a timestamped jpeg (ex: 201209292040.jpg). Changing the variable in the wget statement from JPEGOUT to $JPEGOUT makes wget fail with a "wget: missing URL" error. The weird thing is wget parses the $IP variable correctly. No luck on the output file name. I've tried single quotes, double quotes, parentheses: all to no luck. Here's the script:

        !/bin/bash
        IP=$1
        JPEGOUT= date +%Y%m%d%H%M.jpg
        wget -O JPEGOUT http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25

    Any ideas on how to get the output file name to parse correctly?
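    For reference (not from the original question), a corrected sketch of what the script appears to be aiming for: capture the date output with command substitution, expand the variable with $ when using it, and quote the URL so the shell doesn't treat the & as a background operator (which is what triggers the missing-URL failure once $JPEGOUT expands):

        #!/bin/bash
        IP=$1
        # Command substitution so JPEGOUT holds e.g. 201209292040.jpg
        JPEGOUT=$(date +%Y%m%d%H%M).jpg
        # Quote the URL: an unquoted & would background wget mid-command
        wget -O "$JPEGOUT" "http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25"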

    Read the article

  • dual boot Windows 7 and Ubuntu 11.04, black screen when loading Windows

    - by Sean
    I am proficient with Windows and not so much with Linux. Here is my story: the original system came with Windows 7, I got openSUSE installed on the second hard drive, and dual boot for this setup worked fine. I wanted to switch to a Windows 7 and Ubuntu 11.04 dual boot, so I did a Windows system recovery and it appeared to give me back a fresh Windows 7 install. I then go to install Ubuntu 11.04 and the installer informs me I have multiple operating systems already installed. I go to the advanced partitioning option and, sure enough, Windows 7 is on /sda while openSUSE is still on /sdb. From here I followed this guide (How to dual-boot Linux and Ubuntu with two hard drives) after I had deleted all the openSUSE partitions on /sdb through the Allocate Drive Space tab of the installer. I make the /boot, swap, /, and /home partitions and set GRUB into the MBR of the second disk (/dev/sdb). Everything installs fine. I reboot, Windows loads automatically, I install EasyBCD and add an entry for Ubuntu into the Windows Boot Manager, assigning the type as GRUB2. I reboot the system and it now shows dual boot options for both Windows and Ubuntu. The problem is: while I can use Ubuntu fine, when I try to boot into Windows it just gives me a black screen, and after a little while the fans start running crazy. If I restart the computer I will sometimes get the message that my system was put into hibernation mode because the temperature got too high (90C), which I presume is in accordance with the fans going crazy. I have linked the output from the Boot Info Script below; any suggestions on how to fix this issue would be greatly appreciated! UPDATED SCRIPT OUTPUT: Boot Info Script output: http://paste.ubuntu.com/682152/

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use the standalone server, or BIP with OBIEE using OC4J as the web server: have you ever taken a look at the console window (DOS box/xterm) that you use to start it? Ever turned on debugging, seen masses of info flow by that window, and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output directly to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just plugged in the following to the file under the start section. I just modified the line:

        set CMDARGS=-config "%SERVER_XML%" -userThreads

    to

        set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    I bounced the server and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check out the user docs to find out how to get the log files to write. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • Wireless clients have no route to ethernet clients in OpenWrt router

    - by superjoe30
    I'm using OpenWrt Kamikaze 8.09 on a Linksys WRT54G v1.1 router. I just flashed it with default settings and got everything working, except my wireless laptop cannot ping my desktop which is wired to the router. What can I do to fix this? (My desktop can ping other desktops wired to the router.) My firewall configuration:

        config 'defaults'
            option 'syn_flood' '1'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'

        config 'zone'
            option 'name' 'lan'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'

        config 'zone'
            option 'name' 'wan'
            option 'input' 'REJECT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'
            option 'masq' '1'

        config 'forwarding'
            option 'src' 'lan'
            option 'dest' 'wan'
            option 'mtu_fix' '1'

        config 'redirect'
            option 'src' 'wan'
            option '_name' 'ssh'
            option 'proto' 'tcp'
            option 'src_dport' '22'
            option 'dest_ip' '192.168.1.100'
            option 'dest_port' '22'

        config 'redirect'
            option 'src' 'wan'
            option '_name' 'http'
            option 'proto' 'tcp'
            option 'src_dport' '8888'
            option 'dest_ip' '192.168.1.100'
            option 'dest_port' '8888'
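    One thing worth checking here (a suggestion, not part of the original question): on OpenWrt, wired and wireless clients can only reach each other directly when the wireless interface is bridged into the lan network. A sketch of what to look for, with interface names being illustrative for a WRT54G (verify against /etc/config/network and /etc/config/wireless on the router itself):

        # /etc/config/network: the lan interface should be a bridge
        config 'interface' 'lan'
            option 'type' 'bridge'
            option 'ifname' 'eth0.0'
            option 'proto' 'static'
            option 'ipaddr' '192.168.1.1'
            option 'netmask' '255.255.255.0'

        # /etc/config/wireless: the wifi-iface should attach to 'lan'
        config 'wifi-iface'
            option 'device' 'wl0'
            option 'network' 'lan'
            option 'mode' 'ap'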

    Read the article

  • MacBook Pro Boot Camp SPDIF passthrough?

    - by Ryan Zink
    I'm using Windows 7 through Boot Camp on a unibody MacBook Pro and am having problems using the SPDIF output. I get the expected Dolby Digital or DTS in some movies, but in other movies and in games (Source engine, StarCraft 2) where the output is set to 5.1, the output invariably shows up as Dolby Pro Logic, which means (I think) that passthrough is not enabled. The Boot Camp drivers for the sound card don't have any sort of control panel, and the Windows settings for enabling DTS and Dolby seem to work when I test those outputs in the sound settings. Is there some other setting or utility I can use to enable SPDIF passthrough for all programs?

    Read the article

  • Google I/O 2011: GIS with Google Earth and Google Maps

    Josh Livni, Mano Marks. Building a robust interactive map with a lot of data involves more than just adding a few placemarks. We'll talk about integrating with existing GIS software, importing data from shapefiles and other formats, map projections, and techniques for managing, analyzing, and rendering large datasets. (Video: 52:25)

    Read the article

  • Make the Web Fast: Google Web Fonts - making pretty, fast!

    Join us for a technical deep-dive on Web Fonts: how they work, the data formats, performance optimizations, and tips and tricks for making your site both fast and pretty at the same time - turns out, these two goals are not mutually exclusive! (Video: 1:11:43)

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood Digital Audio device. When I open up pavucontrol, I get a "Connection to PulseAudio failed" message (screenshot in the original post). If I do as the message suggests and run start-pulseaudio-x11, I get this output:

        $ start-pulseaudio-x11
        Connection failure: Connection refused
        pa_context_connect() failed: Connection refused

    How do I correct this error?

    Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood Audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably.

    Command line output as requested in the comments:

        $ ls -l ~/.pulse*
        -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie

        /home/mythbuntu/.pulse:
        total 200
        -rw-r--r-- 1 mythbuntu mythbuntu  8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu    69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink
        -rw-r--r-- 1 mythbuntu mythbuntu    68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source
        -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb
        lrwxrwxrwx 1 mythbuntu mythbuntu    23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8
        -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov  1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb

    And more requested command line output:

        $ ps auxw|grep pulse
        1000  2266  0.5  0.2 294184 9152 ?     S<l  Nov16   4:26 pulseaudio -D
        1000  2413  0.0  0.0  94816 3040 ?     S    Nov16   0:00 /usr/lib/pulseaudio/pulse/gconf-helper
        1000  4875  0.0  0.0   8108  908 pts/0 S+   12:15   0:00 grep --color=auto pulse
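    A common first step in this situation (my suggestion, not part of the original question) is to kill the per-user PulseAudio daemon, let it start fresh, and then retry pavucontrol:

        # Terminate the running daemon for this user, then start a new one
        pulseaudio -k
        pulseaudio --start
        pavucontrol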

    Read the article
