Search Results

Search found 4740 results on 190 pages for 'split mirror'.


  • How to define a batching routing service in WCF

    - by mattx
    I have designed a custom Silverlight WCF channel that I want to leverage to selectively batch calls from the client to the server, and possibly to cache on the client and short-circuit calls. So far I'm just using this channel as a transport and sending the resulting generic WCF messages to the WCF router service example here http://msdn.microsoft.com/en-us/magazine/cc500646.aspx?pr=blog to prototype this on the server side. So my scenario looks like this: IFooClient -> MyTransportChannel -> IRouterService -> IFooService -> return. I now need to be able to send more than one message per call through the router, and carve them up and service them on the server side. Since this is just an experiment and I'm taking baby steps, I will dispatch and service all the messages right away on the server side and return the batch of results. Immediately I noticed that simply making the router interface take Message[] instead of Message doesn't work, due to serialization problems. I guess this makes sense; I'm not sure SOAP envelopes can contain other SOAP envelopes, etc. Is there a simple way to take a collection of WCF Message objects and send them to a single method on a service where they can be split up and forwarded as appropriate? If not, I'd love suggestions on how I should approach this. I want to have minimal work to do on the router service side, so the goal should be to get as close to being able to "slice and forward" as possible.
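    One workaround to sketch (an editorial assumption, not the MSDN router's API): serialize each message's body XML to a string so the batch itself round-trips cleanly, then rehydrate Message objects on the router. The contract and action names below are illustrative only.

        using System.Collections.Generic;
        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Xml;

        // Hypothetical batch contract: each entry is one message's body XML.
        [ServiceContract]
        public interface IBatchRouter
        {
            [OperationContract]
            List<string> ProcessBatch(List<string> messageBodies);
        }

        public class BatchRouter : IBatchRouter
        {
            public List<string> ProcessBatch(List<string> messageBodies)
            {
                var replies = new List<string>();
                foreach (string body in messageBodies)
                {
                    // Rehydrate a Message from the raw body XML and forward it.
                    using (var reader = XmlReader.Create(new StringReader(body)))
                    {
                        Message request = Message.CreateMessage(
                            MessageVersion.Soap11, "urn:illustrative-action", reader);
                        // ... forward 'request' to IFooService and capture the reply ...
                    }
                }
                return replies;
            }
        }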

    Read the article

  • How to deploy a single webapp with multiple web-modules that may be removed or added individually

    - by Daniel Bleisteiner
    We currently run two separate webapps (WARs) deployed in one single EAR containing additional JARs and settings. To improve our deployment I want to split one of these webapps into different modules that may be built and packaged individually. But I currently have no clue how to package these modules so that I'm able to add or remove them as desired, ideally at runtime. The webapp is getting more and more complex and I'd like to separate some of the functionality into modules. These modules should be packaged as single archives. As long as they contain only classes and resources loaded through code, I know how to do this (simple JARs). But what about JSPs? Normally a WAR file contains JSPs or HTML files; in my case they are JSF pages utilizing JBoss Seam and RichFaces. These modules will add classes, resources, JSF pages and other includes to the running web application. Is it somehow possible to deploy them as individual archives that serve the same running webapp? We are using Maven for our build and packaging, and we deploy into JBoss v4.

    Read the article

  • One bi-directional TCP socket OR two uni-directional? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for send and recv) or 2 uni-directional TCP sockets? The test for this task is very much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The benefit of 2 uni-directional connections comes from Linux (http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847):

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
        ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
        ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false in my case, so only a non-uni-directional socket stops the kernel from taking the fast path on receive.
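    For reference, a minimal sketch of making each connected socket uni-directional with shutdown(2), so the kernel sees a pure sender on one connection and a pure receiver on the other (error handling omitted):

        #include <sys/socket.h>

        /* Call after connect()/accept() on both sockets. */
        void make_unidirectional(int send_fd, int recv_fd)
        {
            shutdown(send_fd, SHUT_RD);  /* send_fd: this side only writes */
            shutdown(recv_fd, SHUT_WR);  /* recv_fd: this side only reads  */
        }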

    Read the article

  • Is there a better way to write this Frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one-to-many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users based on the number of desired skills they possess, so a user who has only 1 of 3 desired skills will be further down the list than the user who has 3 of 3. I start with my comma-separated list of skill IDs that are being searched for:

        List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList();

    I then filter out only the types of users that are searchable:

        List<User> users = (from u in db.Users
                            where u.Verified == true
                               && u.Level > 0
                               && u.Type == 1
                               && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null)
                            select u).ToList();

    and then comes the crazy part:

        var fUsers = from u in users
                     select new
                     {
                         u.Id,
                         u.FirstName,
                         u.LastName,
                         u.UserName,
                         UserPhone = u.UserDetail.Phone,
                         UserSkills = (from uskills in u.UserSkills
                                       join skillsJoin in configSkills
                                         on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills
                                       from skillsJoin in tempSkills.DefaultIfEmpty()
                                       where uskills.UserId == u.Id
                                       select new
                                       {
                                           SkillId = uskills.SkillId,
                                           SkillName = skillsJoin.Name,
                                           SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId)
                                       }),
                         UserSkillsFound = (from uskills in u.UserSkills
                                            where uskills.UserId == u.Id
                                               && searchedSkillsRaw.Contains(uskills.SkillId)
                                            select uskills.UserId).Count()
                     } into userResults
                     where userResults.UserSkillsFound > 0
                     orderby userResults.UserSkillsFound descending
                     select userResults;

    and this works! But it seems super bloated and inefficient to me, especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r
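    One possible simplification to sketch (untested against the LINQ provider's translation): since u.UserSkills is already scoped to the user, the UserId filters are redundant, and the count can be computed once with a let clause:

        var fUsers = from u in users
                     let skillsFound = u.UserSkills.Count(s => searchedSkillsRaw.Contains(s.SkillId))
                     where skillsFound > 0
                     orderby skillsFound descending
                     select new { u.Id, u.FirstName, u.LastName, u.UserName,
                                  UserPhone = u.UserDetail.Phone, SkillsFound = skillsFound };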

    Read the article

  • Saving XML in an Excel cell value causes a COMException

    - by mas_oz2k1
    I am trying to save an object (Class1) as a string in a cell value. My issue is that from time to time I get a COMException: HRESULT: 0x8007000E (E_OUTOFMEMORY) when I write the value into a cell (it is kind of random, but I have not identified any particular pattern yet). Any ideas will be welcome. For illustration purposes, let Class1 be the class to be converted to an XML string. (Notice that I removed the XML declaration at the start of the string to avoid having the preamble present, a non-printable character.)

        <Class1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <ElementID>HL690375</ElementID>
        </Class1>

        Class1 myClass = new Class1();
        // the class is converted to a string s
        s = ConvertObjectToXmlString(myClass);
        // then s is assigned to a cell
        Range r = Application.ActiveCell;
        r.Value2 = s;

    Notes: (1) If the string is too big, I limit it to 32000 chars, split the string into chunks of 32000 chars and save the chunks in multiple cells. (2) I do not quote the string before adding it to a cell. Do I need to? If so, how can it be done? (3) All object contents are English. (4) A C# code sample would be great, but VB.NET code is OK.
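    For the 32000-char limit in note (1), a minimal chunking sketch (the helper name and the one-chunk-per-column layout are illustrative assumptions):

        // Split a long XML string into chunks of at most 32000 chars and
        // write one chunk per cell, moving right along the row.
        static void WriteChunked(Excel.Range startCell, string xml)
        {
            const int chunkSize = 32000;
            for (int i = 0, col = 0; i < xml.Length; i += chunkSize, col++)
            {
                string chunk = xml.Substring(i, Math.Min(chunkSize, xml.Length - i));
                startCell.get_Offset(0, col).Value2 = chunk;
            }
        }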

    Read the article

  • UVa's 3n+1 problem

    - by dmindreader
    I'm solving UVa's 3n+1 problem and I don't get why the judge is rejecting my answer. The time limit hasn't been exceeded, and all the test cases I've tried have run correctly so far.

        import java.io.*;

        public class NewClass {
            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) throws IOException {
                int maxCounter = 0;
                int input;
                int lowerBound;
                int upperBound;
                int counter;
                int numberOfCycles;
                int maxCycles = 0;
                int lowerInt;
                BufferedReader consoleInput = new BufferedReader(new InputStreamReader(System.in));
                String line = consoleInput.readLine();
                String[] splitted = line.split(" ");
                lowerBound = Integer.parseInt(splitted[0]);
                upperBound = Integer.parseInt(splitted[1]);
                int[] recentlyused = new int[1000001];
                if (lowerBound > upperBound) {
                    int h = upperBound;
                    upperBound = lowerBound;
                    lowerBound = h;
                }
                lowerInt = lowerBound;
                while (lowerBound <= upperBound) {
                    counter = lowerBound;
                    numberOfCycles = 0;
                    if (recentlyused[counter] == 0) {
                        while (counter != 1) {
                            if (recentlyused[counter] != 0) {
                                numberOfCycles = recentlyused[counter] + numberOfCycles;
                                counter = 1;
                            } else {
                                if (counter % 2 == 0) {
                                    counter = counter / 2;
                                } else {
                                    counter = 3 * counter + 1;
                                }
                                numberOfCycles++;
                            }
                        }
                    } else {
                        numberOfCycles = recentlyused[counter] + numberOfCycles;
                        counter = 1;
                    }
                    recentlyused[lowerBound] = numberOfCycles;
                    if (numberOfCycles > maxCycles) {
                        maxCycles = numberOfCycles;
                    }
                    lowerBound++;
                }
                System.out.println(lowerInt + " " + upperBound + " " + (maxCycles + 1));
            }
        }
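    A likely culprit (an assumption, not verified against the judge): the program reads only one line, but the judge supplies many pairs; intermediate Collatz values can also exceed the memo array's bounds and overflow int. A minimal input-loop sketch:

        import java.io.*;

        public class Loop {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = in.readLine()) != null && line.trim().length() > 0) {
                    String[] parts = line.trim().split("\\s+");
                    int lo = Integer.parseInt(parts[0]);
                    int hi = Integer.parseInt(parts[1]);
                    // ... compute the max cycle length for [lo, hi] as before,
                    // but use a long for the Collatz value and bounds-check the
                    // memo array, since intermediates can exceed 1,000,000 ...
                }
            }
        }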

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all data on the map stored in 1 table. Whenever the map display is loaded it simply queries the database for contents in x and y, and the DB gives the data (other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then has an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure for such a design is (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Or should it just keep growing and growing? There are a lot of queries to the table, so I'm just trying to see what is best :/
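    One common starting point, sketched here under the assumption of MySQL and illustrative table/column names: a composite index on the coordinates, so map-window queries don't scan the whole table.

        -- Hypothetical names; serves WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?
        CREATE INDEX idx_map_xy ON map_items (x, y);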

    Read the article

  • Could I do this blind relative-to-absolute path conversion (for Perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end

    Read the article

  • Run command with space characters in bash script

    - by ??iu
    I have a file that contains a list of files:

        02 of Clubs.eps
        02 of Diamonds.eps
        02 of Hearts.eps
        02 of Spades.eps
        ...

    I am attempting to mass-convert these to png format in several sizes. The script I am using to do this is:

        while read -r line
        do
            for i in 80 35 200
            do
                convert $(sed 's/ /\\ /g' <<< Cards/${line}) -size ${i}x${i} \
                    ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png
            done
        done < card_list.txt

    However, this doesn't work, apparently trying to split on each word, resulting in the following error output:

        convert: unable to open image `Cards/02\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `Cards/02\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `of\': No such file or directory @ error/blob.c/OpenBlob/2514.
        convert: no decode delegate for this image format `of\' @ error/constitute.c/ReadImage/532.
        convert: unable to open image `Clubs.eps': No such file or directory @ error/blob.c/OpenBlob/2514.

    If I change the convert to an echo the result looks right, and if I copy a line and run it myself in the shell it works fine:

        convert Cards/02\ of\ Clubs.eps -size 80x80 ../img/card/02_of_clubs_80.png
        convert Cards/02\ of\ Clubs.eps -size 35x35 ../img/card/02_of_clubs_35.png
        convert Cards/02\ of\ Clubs.eps -size 200x200 ../img/card/02_of_clubs_200.png
        convert Cards/02\ of\ Diamonds.eps -size 80x80 ../img/card/02_of_diamonds_80.png
        convert Cards/02\ of\ Diamonds.eps -size 35x35 ../img/card/02_of_diamonds_35.png
        convert Cards/02\ of\ Diamonds.eps -size 200x200 ../img/card/02_of_diamonds_200.png
        convert Cards/02\ of\ Hearts.eps -size 80x80 ../img/card/02_of_hearts_80.png
        convert Cards/02\ of\ Hearts.eps -size 35x35 ../img/card/02_of_hearts_35.png
        convert Cards/02\ of\ Hearts.eps -size 200x200 ../img/card/02_of_hearts_200.png
        convert Cards/02\ of\ Spades.eps -size 80x80 ../img/card/02_of_spades_80.png

    UPDATE: Just adding quotes (see below) has the same result as the above, where I had been using sed to add backslashes. I've tried both double and single quotes.

        convert '"'Cards/${line}'"' -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png
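    For reference, a sketch of the usual fix: backslashes produced by command substitution are never re-parsed as quoting, and '"'...'"' inserts literal quote characters, so a plain double-quoted expansion is all that's needed:

        while read -r line; do
            for i in 80 35 200; do
                name=$(tr ' ' '_' <<< "$line" | tr '[:upper:]' '[:lower:]')
                convert "Cards/$line" -size "${i}x${i}" "../img/card/$(basename "$name" .eps)_${i}.png"
            done
        done < card_list.txt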

    Read the article

  • CSV to hash data structure conversion using Perl

    - by Kavya S
    1. Convert a .csv file to a Perl hash data structure. Format of the .csv file:

        sw,s1,s2,s3,s4
        ver,v1,v2,v3,v4
        msword,v2,v3,v1,v1
        paint,v4,v2,v3,v3
        outlook,v1,v1,v3,v2

    My Perl script:

        #!/usr/local/bin/perl
        use strict;
        use warnings;
        use Data::Dumper;

        my %hash;
        open my $fh, '<', 'some_file.csv' or die "Cannot open: $!";
        while (my $line = <$fh>) {
            $line =~ s/,,/-/;
            chomp($line);
            my @array = split /,/, $line;
            my $key = shift @array;
            $hash{$key} = $line;
            $hash{$key} = \@array;
        }
        print Dumper(\%hash);
        close $fh;

    The Perl hash, i.e. the output, should look like:

        $sw_ver_db = {
            s1 => { msword => {ver => v2}, paint => {ver => v4}, outlook => {ver => v1} },
            s2 => { msword => {ver => v3}, paint => {ver => v2}, outlook => {ver => v1} },
            s3 => { msword => {ver => v1}, paint => {ver => v3}, outlook => {ver => v3} },
            s4 => { msword => {ver => v1}, paint => {ver => v3}, outlook => {ver => v2} },
        };
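    A sketch of one way to build that nested structure (assuming the first row names the systems and the "ver" row can be skipped, since each program row already carries its per-system versions):

        #!/usr/local/bin/perl
        use strict;
        use warnings;
        use Data::Dumper;

        open my $fh, '<', 'some_file.csv' or die "Cannot open: $!";

        chomp(my $header = <$fh>);
        my (undef, @systems) = split /,/, $header;   # s1, s2, s3, s4
        <$fh>;                                       # skip the "ver,..." row

        my %sw_ver_db;
        while (my $line = <$fh>) {
            chomp $line;
            my ($program, @versions) = split /,/, $line;
            for my $i (0 .. $#systems) {
                $sw_ver_db{ $systems[$i] }{$program}{ver} = $versions[$i];
            }
        }
        print Dumper(\%sw_ver_db);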

    Read the article

  • jQuery autocomplete problem

    - by heffaklump
    I'm using jQuery's Autocomplete plugin, but it doesn't autocomplete upon entering anything. Any ideas why it doesn't work? The basic example works, but not mine.

        var ppl = {"ppl":[{"name":"peterpeter", "work":"student"}, {"name":"piotr","work":"student"}]};

        var options = {
            matchContains: true, // So we can search inside the string too
            minChars: 2,         // this sets autocomplete to begin from X characters
            dataType: 'json',
            parse: function(data) {
                var parsed = [];
                data = data.ppl;
                for (var i = 0; i < data.length; i++) {
                    parsed[parsed.length] = {
                        data: data[i],        // the entire JSON entry
                        value: data[i].name,  // the default display value
                        result: data[i].name  // to populate the input element
                    };
                }
                return parsed;
            },
            // To format the data returned by the autocompleter for display
            formatItem: function(item) {
                return item.name;
            }
        };

        $('#inputplace').autocomplete(ppl, options);

    Ok. Updated:

        <input type="text" id="inputplace" />

    So, when entering for example "peter" in the input field, no autocomplete suggestions appear. It should give "peterpeter" but nothing happens. And one more thing: using this example works perfectly.

        var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
        $("#inputplace").autocomplete(data);
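    A possible fix to sketch (assuming the classic bassistance autocomplete plugin, whose local-data mode expects a plain array; parse/formatItem are meant for remote URLs): flatten the JSON into an array of names first.

        // Flatten the object into the array form that local mode understands.
        var names = $.map(ppl.ppl, function(p) { return p.name; });
        $('#inputplace').autocomplete(names, { matchContains: true, minChars: 2 });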

    Read the article

  • Java: which configuration framework to use?

    - by Laimoncijus
    Hi, I need to decide which configuration framework to use. At the moment I am thinking between using properties files and XML files. My configuration needs some primitive grouping, e.g. in XML format it would be something like:

        <configuration>
          <group name="abc">
            <param1>value1</param1>
            <param2>value2</param2>
          </group>
          <group name="def">
            <param3>value3</param3>
            <param4>value4</param4>
          </group>
        </configuration>

    or a properties file (something similar to log4j.properties):

        group.abc.param1 = value1
        group.abc.param2 = value2
        group.def.param3 = value3
        group.def.param4 = value4

    I need a bi-directional (read and write) configuration library/framework. A nice feature would be that I could somehow read out different configuration groups as different objects, so I could later pass them to different places, e.g. reading everything that belongs to group "abc" as one object and "def" as another. If that is not possible I can always split the single configuration object into smaller ones myself in the application initialization part, of course. Which framework would best fit for me?
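    For what it's worth, a minimal sketch of the grouping idea with Apache Commons Configuration (one candidate framework; API details assume its 1.x line):

        import org.apache.commons.configuration.Configuration;
        import org.apache.commons.configuration.PropertiesConfiguration;

        public class ConfigDemo {
            public static void main(String[] args) throws Exception {
                PropertiesConfiguration config = new PropertiesConfiguration("app.properties");

                // Read one group as its own view; keys lose the "group.abc." prefix.
                Configuration abc = config.subset("group.abc");
                String param1 = abc.getString("param1");

                // Bi-directional: write a value back and persist it.
                config.setProperty("group.def.param3", "newValue");
                config.save();
            }
        }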

    Read the article

  • How to change a value inside a JSON string

    - by Jeremy Roy
    I have a JSON string array of objects like this:

        [{"id":"4","rank":"adm","title":"title 1"},
         {"id":"2","rank":"mod","title":"title 2"},
         {"id":"5","rank":"das","title":"title 3"},
         {"id":"1","rank":"usr","title":"title 4"},
         {"id":"3","rank":"ref","title":"title 5"}]

    I want to change its title value once the id matches. So if my variable myID is 5, I want to change the title "title 5" to a new title, and so on. And then I get the new JSON array into $("#rangArray").val(jsonStr); Something like:

        $.each(jsonStr, function(k, v) {
            if (v == myID) {
                this.title = 'new title';
                $("#myTextArea").val(jsonStr);
            }
        });

    Here is the full code:

        $('img.delete').click(function() {
            var deltid = $(this).attr("id").split('_');
            var newID = deltid[1];
            var jsonStr = JSON.stringify(myArray);
            $.each(jsonStr, function(k, v) {
                if (v == newID) {
                    // how to change the title
                    jsonStr[k].title = 'new title';
                    alert(jsonStr);
                    $("#rangArray").val(jsonStr);
                }
            });
        });

    The above is not working. Any help please?
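    A sketch of the usual fix: iterate the array itself rather than the stringified text, compare against each object's id, and serialize once at the end.

        $('img.delete').click(function() {
            var newID = $(this).attr('id').split('_')[1];
            $.each(myArray, function(i, item) {
                if (item.id == newID) {   // ids are strings here, so == is deliberate
                    item.title = 'new title';
                }
            });
            $('#rangArray').val(JSON.stringify(myArray));  // stringify after editing
        });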

    Read the article

  • 500 internal server error at form connection

    - by klox
    Hi all, I've a problem: I can't connect to the database. What's wrong with my code? This is my code:

        $("#mod").change(function() {
            var barcode;
            barCode = $("#mod").val();
            var data = barCode.split(" ");
            $("#mod").val(data[0]);
            $("#seri").val(data[1]);
            var str = data[0];
            var matches = str.match(/(EE|[EJU]).*(D)/i);
            $.ajax({
                type: "post",
                url: "process1.php",
                data: "value=" + matches + "action=tunermatches",
                cache: false,
                async: false,
                success: function(res) {
                    $('#rslt').replaceWith("<div id='value'><h6>Tuner range is" + res + " .</h6></div>");
                }
            });
        });

    and this is my process file:

        switch (postVar('action')) {
            case 'tunermatches':
                tunermatches(postVar('tuner'));
                break;

        function tunermatches($tuner)){
            $Tuner = mysql_real_escape_string($tuner);
            $sql = "SELECT remark FROM settingdata WHERE itemname="Tuner_range" AND itemdata="$Tunermatches";
            $res = mysql_query($sql);
            $dat = mysql_fetch_array($res, MYSQL_NUM);
            if ($dat[0] > 0) {
                echo $dat[0];
            }
            mysql_close($dbc);
        }
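    Several parse-level problems stand out; a sketch of the corrections (staying with the mysql_* API the question already uses): the extra ')' in the function signature, the unescaped double quotes inside the SQL string, the $Tunermatches variable that is never set, the missing '&' between the two POST fields, and the client sending 'value' while the server reads 'tuner'.

        // JavaScript side: separate the POST fields and match the key the
        // server reads:  data: "tuner=" + matches + "&action=tunermatches",

        function tunermatches($tuner) {  // one closing parenthesis only
            $Tuner = mysql_real_escape_string($tuner);
            $sql = "SELECT remark FROM settingdata
                    WHERE itemname = 'Tuner_range' AND itemdata = '$Tuner'";
            $res = mysql_query($sql);
            $dat = mysql_fetch_array($res, MYSQL_NUM);
            if ($dat[0] > 0) {
                echo $dat[0];
            }
        }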

    Read the article

  • Threading calls to web service in a web service - (.net 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls, in a web service. Our portal will get a message, split that message into 2 messages, and then do 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to (pseudo code):

        XmlNode DNode = GetaGetDemoNodeSomehow();
        XmlNode ENode = GetAGetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;

        Thread dThread = new Thread(delegate {
            // Web Service Call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });

        Thread eThread = new Thread(delegate {
            // Web Service Call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });

        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();

        // combine the resulting XML and return it.
        // Maybe throw a bit of logging in to make architecture happy

    Another option we thought of is to create a worker class, pass it the service information and have it execute. This would allow us to have a bit more control over what is going on, but could add additional overhead. Another option brought up was 2 asynchronous calls managed through a loop: when the calls are completed (success or error) the loop picks it up and ends. The portal service will be called about 50,000 times a day. I don't want to gold plate this sucker; I'm looking for something lightweight. The services being called on the broker do have timeout limits set and are already heavily logged and audited, so I'm not worried about that part. This is .NET 2.0, and as much as I would love to upgrade I can't right now, so please leave anything newer than 2.0 out.
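    On .NET 2.0, the generated proxy classes typically expose a Begin/End pair for each operation, which covers the "2 asynchronous calls" option without spinning up raw threads. A sketch (the Begin/EndHIALRequest names assume the standard generated IAsyncResult pattern):

        GetDemographics d = new GetDemographics();
        GetEligibility ge = new GetEligibility();

        // Kick off both calls; each Begin* returns immediately.
        IAsyncResult dAsync = d.BeginHIALRequest(DNode, null, null);
        IAsyncResult eAsync = ge.BeginHIALRequest(ENode, null, null);

        // Block until both complete; the proxies' own timeouts still apply.
        XmlNode demResponse = d.EndHIALRequest(dAsync);
        XmlNode elResponse = ge.EndHIALRequest(eAsync);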

    Read the article

  • How to append a string to the next line in Perl

    - by tprayush
    Hi all, I have a requirement like this (this is just a sample script):

        $ cat test.sh
        #!/bin/bash
        perl -e '
        open(IN,"addrss");
        open(out,">>addrss");
        @newval;
        while (<IN>) {
            @col_val=split(/:/);
            if ($.==1) {
                for($i=0;$i<=$#col_val;$i++) {
                    print("Enter value for $col_val[$i] : ");
                    chop($newval[$i]=<STDIN>);
                }
                $str=join(":");
                $_="$str"
                print OUT;
            } else {
                exit 0;
            }
        }
        close(IN);
        close(OUT);
        '

    When I run this script:

        $ ./test.sh
        Enter value for NAME : abc
        Enter value for ADDRESS : asff35
        Enter value for STATE : XYZ
        Enter value for CITY : EIDHFF
        Enter value for CONTACT : 234656758

        $ cat addrss
        NAME:ADDRESS:STATE:CITY:CONTACT
        abc:asff35:XYZ:EIDHFF:234656758

    When run a second time:

        $ cat addrss
        NAME:ADDRESS:STATE:CITY:CONTACT
        abc:asff35:XYZ:EIDHFF:234656758ioret:56fgdh:ghdgh:afdfg:987643221   ## it is appended on the same line

    I want it to be added to the next line. NOTE: I want to do this by explicitly using the filehandles in Perl, and not with redirection operators in the shell. Please help me!
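    A sketch of the usual fixes (the open must create the OUT handle that is later printed to, join needs its list argument, and the new record needs its own line; whether a leading newline is required depends on the existing file ending without one, which the pasted output suggests):

        open(IN,  '<',  'addrss') or die "read: $!";
        open(OUT, '>>', 'addrss') or die "append: $!";

        chomp(my $header = <IN>);          # first line: NAME:ADDRESS:STATE:CITY:CONTACT
        my @col_val = split /:/, $header;

        my @newval;
        for my $i (0 .. $#col_val) {
            print "Enter value for $col_val[$i] : ";
            chomp($newval[$i] = <STDIN>);
        }

        print OUT "\n", join(':', @newval);  # "\n" first: start on a fresh line
        close(IN);
        close(OUT);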

    Read the article

  • Cannot add an entity that already exists.

    - by mazhar
    Code:

        public ActionResult Create(Group group)
        {
            if (ModelState.IsValid)
            {
                group.int_CreatedBy = 1;
                group.dtm_CreatedDate = DateTime.Now;
                var Groups = Request["Groups"];
                int GroupId = 0;
                GroupFeature GroupFeature = new GroupFeature();
                foreach (var GroupIdd in Groups)
                {
                    // GroupId = int.Parse(GroupIdd.ToString());
                }
                var Features = Request["Features"];
                int FeatureId = 0;
                int t = 0;
                int ids = 0;
                string[] Feature = Features.Split(',').ToArray();
                //foreach (var FeatureIdd in Features)
                for (int i = 0; i < Feature.Length; i++)
                {
                    if (int.TryParse(Feature[i].ToString(), out ids))
                    {
                        GroupFeature.int_GroupId = 35;
                        GroupFeature.int_FeaturesId = ids;
                        if (ids != 0)
                        {
                            GroupFeatureRepository.Add(GroupFeature);
                            GroupFeatureRepository.Save();
                        }
                    }
                }
                return RedirectToAction("Details", new { id = group.int_GroupId });
            }
            return View();
        }

    I am getting the error "Cannot add an entity that already exists." at the lines GroupFeatureRepository.Add(GroupFeature); GroupFeatureRepository.Save();
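    The usual cause, sketched here with the question's own names (and assuming LINQ to SQL behind the repository): the single GroupFeature instance is already tracked after the first Add, so the second Add re-inserts the same entity. Constructing a fresh entity per iteration avoids it:

        foreach (string part in Features.Split(','))
        {
            int featureId;
            if (int.TryParse(part, out featureId) && featureId != 0)
            {
                GroupFeature gf = new GroupFeature();  // new entity per row
                gf.int_GroupId = 35;
                gf.int_FeaturesId = featureId;
                GroupFeatureRepository.Add(gf);
            }
        }
        GroupFeatureRepository.Save();  // one submit for the whole batch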

    Read the article

  • IE7 Problem with sIFR when <br> is inside an H3

    - by David Fox
    I have a problem I just discovered when viewing certain pages in IE7. If I have a very long header that wraps to a second line, or worse, if I put a BR in the middle, that throws off the spacing. One page to look at: broken example1. You'll notice that the margin at the top of the page gets offset as the headings are rendered, throwing everything off. I'm using code like this:

        <h3 style="margin:0"><a href="../books/msc1.html">Middle School Confidential™<br>
        Book 1: Be Confident in Who You Are</a></h3>

    but repeated many times to exaggerate the problem. I tried another test where I removed the BR and let the lines wrap naturally. This is an improvement in terms of the spacing, but it doesn't fix the problem. (Same URL, but make it m1.html.) In the third example, each heading takes up only one line (m2.html). One option would be to just split the heading onto two lines, each with its own H tags. But since these are links, it would then appear that the first line might go to one place and the second to another, since they wouldn't change color simultaneously as you roll over them. So, any solutions to this? I believe I have the current version of sIFR 3. I don't want to upgrade to IE8 until I know this is resolved. Thanks!

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello. Recently I have been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records there would be some attributes with the same domain_code but with a different sub_domain_code, which need further pivoting on the subdomain code. For example: if I have domain codes 01, 02, 03, those are a straight pivot on domain code. I would also have domain code 10 with subdomain codes / versions 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code and the other for domain code + version. So far so good. When it comes to loading the Dim table: per the design specs for the dimensions (originally written by a third party), for every single change in the EDW (attribute) the load should assemble all the related records (for that NK), meaning the new one plus the other attribute values that are current, process them to create a new Dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it has to assemble 100 + (100*9) records to insert into / update the Dim table. How good is this approach? The other way I tried is to just do a lookup into the Dim table for that NK, get the values of the most recent record (the attributes that didn't change), insert the new record and update the current one. Which would be the better approach: assembling records on the source side for one attribute change, or looking at the Dim table's most recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks

    Read the article

  • Splitting a 25MB .txt file into smaller files using a text delimiter

    - by user574141
    Regards, SO. I am new to Python and Perl. I have been trying to solve a simple problem and getting tied in knots with syntax; I hope someone has the time and patience to help. I have a 25MB file in ".txt" format which contains news-wire articles going back to 1970. Each news story is concatenated to the next, with only the "Copyright" statement to delimit. Each news story starts with "Item XX of XXX DOCUMENTS". There are certain metadata that are repeated throughout, which I will use for tagging later on. I wish to split this 25MB file into separate .txt files, each containing one news story (i.e. the text between "DOCUMENTS" and "Copyright"), saving each with a different name (obviously). I am trying to 1) open the file, 2) iterate over the lines in the file, checking for the end-of-story delimiter, and if it is not present writing the line to a list, and 3) write that list to a separate small file. I'm having big problems with changing filenames using the counter, and with how to make Python start from where I left off - is the "seek" function appropriate? So far I have been trying this approach, completely unsuccessfully:

        myfile = open("myfile.txt", 'r')
        filenumber = 0
        for line in myfile.readline():
            filenumber += 1
            w = 0
            while myfile.readline() != '\s+DOCUMENTS\s*\n'
                ### read my line into a list
                mysmallfile()['w'] = [myfile.readline()]
                w += 1
            output = open('C:\\Users\\dunner7\\Documents\###how do I change the filename each iteration???', 'w')
            output.writelines(mysmallfile)
            ###go back to start.

    Thank you for your time and patience. RD
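    A sketch of one way to do the split (assuming a line starting with "Copyright" ends each story; the output directory and naming scheme are illustrative):

        import os

        out_dir = r'C:\Users\dunner7\Documents'   # illustrative output folder
        story_lines = []
        file_number = 0

        def flush(lines, number):
            """Write one story's lines to its own numbered file."""
            path = os.path.join(out_dir, 'story_%04d.txt' % number)
            with open(path, 'w') as out:
                out.writelines(lines)

        with open('myfile.txt', 'r') as source:
            for line in source:
                story_lines.append(line)
                if line.lstrip().startswith('Copyright'):  # assumed end-of-story marker
                    file_number += 1
                    flush(story_lines, file_number)
                    story_lines = []

        if story_lines:  # any trailing text after the last Copyright line
            flush(story_lines, file_number + 1)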

    Read the article

  • Multithreading A Function in VB.Net

    - by Ben
    I am trying to multi-thread my application so it stays responsive while it is executing the process. This is what I have so far:

        Private Sub SendPOST(ByVal URL As String)
            Try
                Dim DataBytes As Byte() = Encoding.ASCII.GetBytes("")
                Dim Request As HttpWebRequest = TryCast(WebRequest.Create(URL.Trim & "/webdav/"), HttpWebRequest)
                Request.Method = "POST"
                Request.ContentType = "application/x-www-form-urlencoded"
                Request.ContentLength = DataBytes.Length
                Request.Timeout = 1000
                Request.ReadWriteTimeout = 1000
                Dim PostData As Stream = Request.GetRequestStream()
                PostData.Write(DataBytes, 0, DataBytes.Length)
                Dim Response As WebResponse = Request.GetResponse()
                Dim ResponseStream As Stream = Response.GetResponseStream()
                Dim StreamReader As New IO.StreamReader(ResponseStream)
                Dim Text As String = StreamReader.ReadToEnd()
                PostData.Close()
            Catch ex As Exception
                If ex.ToString.Contains("401") Then
                    TextBox2.Text = TextBox2.Text & URL & "/webdav/" & vbNewLine
                End If
            End Try
        End Sub

        Public Sub G0()
            Dim siteSplit() As String = TextBox1.Text.Split(vbNewLine)
            For i = 0 To siteSplit.Count - 1
                Try
                    If siteSplit(i).Contains("http://") Then
                        SendPOST(siteSplit(i).Trim)
                    Else
                        SendPOST("http://" & siteSplit(i).Trim)
                    End If
                Catch ex As Exception
                End Try
            Next
        End Sub

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Dim t As Thread
            t = New Thread(AddressOf Me.G0)
            t.Start()
        End Sub

    However, the 'G0' sub code is not being executed at all, and I need to multi-thread the 'SendPOST' as that is what slows the application.
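    One likely reason G0 appears to do nothing (an assumption based only on the code shown): it reads TextBox1 and writes TextBox2 from the worker thread, and the resulting cross-thread exceptions are swallowed by the empty Catch. A minimal sketch of the fix (the lambda syntax needs VB 2010; older versions can use a named delegate instead):

        ' Capture the input on the UI thread before starting the worker:
        Dim sites As String = TextBox1.Text

        ' ...and marshal UI updates back from SendPOST via Invoke:
        Me.Invoke(Sub() TextBox2.AppendText(URL & "/webdav/" & vbNewLine))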

    Read the article

  • Calculating a consecutive streak in data

    - by Jura25
    I'm trying to calculate the maximum winning and losing streak in a dataset (i.e. the highest number of consecutive positive or negative values). I've found a somewhat related question here on StackOverflow, and even though that gave me some good suggestions, the angle of that question is different, and I'm not (yet) experienced enough to translate and apply that information to this problem. So I was hoping you could help me out; even a suggestion would be great. My data set looks like this:

        > subRes
           Instrument TradeResult.Currency.
        1         JPM                    -3
        2         JPM                   264
        3         JPM                   284
        4         JPM                    69
        5         JPM                   283
        6         JPM                  -219
        7         JPM                   -91
        8         JPM                   165
        9         JPM                   -35
        10        JPM                  -294
        11        KFT                    -8
        12        KFT                   -48
        13        KFT                   125
        14        KFT                  -150
        15        KFT                  -206
        16        KFT                   107
        17        KFT                   107
        18        KFT                    56
        19        KFT                   -26
        20        KFT                   189

        > split(subRes[,2], subRes[,1])
        $JPM
        [1]   -3  264  284   69  283 -219  -91  165  -35 -294

        $KFT
        [1]   -8  -48  125 -150 -206  107  107   56  -26  189

    In this case, the maximum (winning) streak for JPM is four (namely the 264, 284, 69 and 283 consecutive positive results) and for KFT this value is 3 (107, 107, 56). My goal is to create a function which gives the maximum winning streaks per instrument (i.e. JPM: 4, KFT: 3). To achieve that, R needs to compare the current result with the previous result, and if it is higher then there is a streak of at least 2 consecutive positive results. Then R needs to look at the next value, and if this is also higher, add 1 to the already-found value of 2. If this value isn't higher, R needs to move on to the next value, while remembering 2 as the intermediate maximum. I've tried cumsum and cummax combined with conditional summing (like cumsum(c(TRUE, diff(subRes[,2]) > 0))), which didn't work out. Also rle combined with lapply (like lapply(rle(subRes$TradeResult.Currency.), function(x) diff(x) > 0)) didn't work. How can I make this work?
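    For reference, a sketch of the rle() approach (run lengths of positive vs. non-positive results, applied per instrument; it assumes each instrument has at least one positive trade):

        max_streak <- function(x) {
          r <- rle(x > 0)               # runs of positive vs. non-positive results
          max(r$lengths[r$values])      # longest run of positives
        }

        sapply(split(subRes$TradeResult.Currency., subRes$Instrument), max_streak)
        #  JPM  KFT
        #    4    3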

    Read the article

  • Performing more than one Where in a query returns null! Why, and how to fix this?

    - by Sadegh
    Hi, I have written a method that filters output with the provided query and returns it. When one Where is executed, it returns the correct output; but when more than one Where is executed, the output is null and an exception occurs with the message "Enumeration yielded no results". Why? How can I fix it?

        public IQueryable<SearchResult> PerformSearch(string query, int skip = 0, int take = 5)
        {
            if (!string.IsNullOrEmpty(query))
            {
                var queryList = query.Split('+').ToList();
                var results = GENERATERESULTS();
                string key;
                foreach (string _q in queryList)
                {
                    if (_q.StartsWith("(") && _q.EndsWith(")"))
                    {
                        key = _q.Replace("(", "").Replace(")", "");
                        results = results.Where(q => q.Title.Contains(key, StringComparison.CurrentCultureIgnoreCase));
                    }
                    else if (_q.StartsWith("\"") && _q.EndsWith("\""))
                    {
                        key = _q.Replace("\"", "").Replace("\"", "");
                        results = results.Where(q => q.Title.Contains(key, StringComparison.CurrentCulture));
                    }
                    else if (_q.StartsWith("-(") && _q.EndsWith(")"))
                    {
                        key = _q.Replace("-(", "").Replace(")", "");
                        results = results.Where(q => !q.Title.Contains(key, StringComparison.CurrentCultureIgnoreCase));
                    }
                    else
                    {
                        key = _q;
                        results = results.Where(q => q.Title.Contains(key, StringComparison.CurrentCulture));
                    }
                }
                this._Count = results.Count();
                results = results.Skip(skip).Take(take);
                this._EndOn = DateTime.Now;
                this.ExecutionTime();
                return results;
            }
            else
                return null;
        }

    Thanks in advance ;)
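    The likely culprit is the single 'key' variable declared outside the loop: the Where calls are deferred, so by the time the query actually runs, every lambda sees the last key. A sketch of the fix is to declare the captured variable inside the loop:

        foreach (string _q in queryList)
        {
            // Declare key inside the loop so each deferred Where captures
            // its own variable instead of one shared across iterations.
            string key = _q;
            if (_q.StartsWith("(") && _q.EndsWith(")"))
                key = _q.Replace("(", "").Replace(")", "");
            // ... the other branches assign key the same way ...
            results = results.Where(q => q.Title.Contains(key, StringComparison.CurrentCultureIgnoreCase));
        }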

    Read the article

  • How to create Copy items from property values?

    - by Nam Gi VU
    Let's say I have a list of sub paths such as:

        <PropertyGroup>
          <subPaths>$(path1)\**\*;
                    $(path2)\**\*;
                    $(path3)\file3.txt;
          </subPaths>
        </PropertyGroup>

    I want to copy these files from folder A to folder B (surely we already have all the sub folders/files in A). What I tried was:

        <Target Name="Replace" DependsOnTargets="Replace_Init; Replace_Copy1Path">
        </Target>

        <Target Name="Replace_Init">
          <PropertyGroup>
            <subPaths>$(path1)\**\*;
                      $(path2)\**\*;
                      $(path3)\file3.txt;
            </subPaths>
          </PropertyGroup>
          <ItemGroup>
            <subPathItems Include="$(subPathFiles.Split(';'))" />
          </ItemGroup>
        </Target>

        <Target Name="Replace_Copy1Path" Outputs="%(subPathItems.Identity)">
          <PropertyGroup>
            <src>$(folderA)\%(subPathItems.Identity)</src>
            <dest>$(folderB)\%(subPathItems.Identity)</dest>
          </PropertyGroup>
          <Copy SourceFiles="$(src)" DestinationFiles="$(dest)" />
        </Target>

    But the Copy task didn't work. It doesn't expand the \**\* wildcards into files. What did I do wrong? Please help!
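    For what it's worth, a sketch of the standard pattern: MSBuild only expands wildcards inside item Includes (a property holds literal text), so declaring the items with the source-folder prefix avoids the Split entirely. RecursiveDir, Filename and Extension are well-known item metadata; whether the path1/path2 prefixes must be recreated under folder B is left as an assumption here.

        <ItemGroup>
          <!-- Expand the wildcards against folder A; one Include per sub path. -->
          <srcFiles Include="$(folderA)\$(path1)\**\*" />
          <srcFiles Include="$(folderA)\$(path2)\**\*" />
          <srcFiles Include="$(folderA)\$(path3)\file3.txt" />
        </ItemGroup>

        <Target Name="Replace_CopyAll">
          <!-- %(RecursiveDir) keeps the structure under each **; adjust the
               transform if the path1/path2 prefixes must be preserved too. -->
          <Copy SourceFiles="@(srcFiles)"
                DestinationFiles="@(srcFiles->'$(folderB)\%(RecursiveDir)%(Filename)%(Extension)')" />
        </Target>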

    Read the article
