Search Results

Search found 9273 results on 371 pages for 'complex strings'.


  • Registry Problem

    - by Dominik
    I made a launcher for my game server (World of Warcraft). I want to get the install path of the game, as browsed to by the user. I'm using this code to show the browse dialog, get the install path, build some other strings from the install path string, and then store them in my registry key.

        using System;
        using System.Drawing;
        using System.Reflection;
        using System.Collections;
        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Data;
        using Microsoft.Win32;
        using System.IO;
        using System.Net.NetworkInformation;
        using System.Diagnostics;
        using System.Runtime;
        using System.Runtime.InteropServices;
        using System.Security;
        using System.Security.Cryptography;
        using System.Text;
        using System.Net;
        using System.Linq;
        using System.Net.Sockets;
        using System.Collections.Generic;
        using System.Threading;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                string InstallPath, WoWExe, PatchPath;

                private void Form1_Load(object sender, EventArgs e)
                {
                    RegistryKey LocalMachineKey_Existence;
                    MessageBox.Show("Browse your install location.", "Select Wow.exe");
                    OpenFileDialog BrowseInstallPath = new OpenFileDialog();
                    BrowseInstallPath.Filter = "wow.exe|*.exe";
                    if (BrowseInstallPath.ShowDialog() == DialogResult.OK)
                    {
                        InstallPath = System.IO.Path.GetDirectoryName(BrowseInstallPath.FileName);
                        WoWExe = InstallPath + "\\wow.exe";
                        PatchPath = InstallPath + "\\Data\\";
                        LocalMachineKey_Existence = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ExistenceWoW");
                        LocalMachineKey_Existence.SetValue("InstallPathLocation", InstallPath);
                        LocalMachineKey_Existence.SetValue("PatchPathLocation", PatchPath);
                        LocalMachineKey_Existence.SetValue("WoWExeLocation", WoWExe);
                    }
                }
            }
        }

    The problem: on some computers it doesn't store the values the way it should. For example, if wow.exe is at C:\ASD\wow.exe and you select it in the browse window, the program should store C:\ASD\Data\ in the Existence registry key, but it stores C:\ASDData instead, so a backslash is lost. See this picture: http://img21.imageshack.us/img21/2829/regedita.jpg. The program works fine on my PC and on my friend's PC, but on some machines this bug shows up. I'm on Windows 7 with .NET 3.5. Please help me.
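
    A side note that is not part of the original question: a dropped path separator is the classic symptom of building paths by plain string concatenation, and the usual cure is to let a path API do the joining (System.IO.Path.Combine on .NET). The idea, sketched in Java since that is the language used for the examples added to this digest, with a made-up install directory:

        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class PathJoinDemo {
            public static void main(String[] args) {
                String installPath = "C:\\ASD";   // hypothetical install directory picked by the user
                // Paths.get() inserts exactly one separator between segments,
                // so neither a missing nor a doubled backslash can sneak in.
                Path exePath   = Paths.get(installPath, "wow.exe");
                Path patchPath = Paths.get(installPath, "Data");
                System.out.println(exePath);    // C:\ASD\wow.exe (on Windows)
                System.out.println(patchPath);  // C:\ASD\Data
            }
        }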


  • Copying metadata over a database link in Oracle 10g

    - by Tunde
    Thanks in advance for your help experts. I want to be able to copy over database objects from database A into database B with a procedure created on database B. I created a database link between the two and have tweaked the get_ddl function of the dbms_metadata to look like this: create or replace function GetDDL ( p_name in MetaDataPkg.t_string p_type in MetaDataPkg.t_string ) return MetaDataPkg.t_longstring is -- clob v_clob clob; -- array of long strings c_SYSPrefix constant char(4) := 'SYS_'; c_doublequote constant char(1) := '"'; v_longstrings metadatapkg.t_arraylongstring; v_schema metadatapkg.t_string; v_fullength pls_integer := 0; v_offset pls_integer := 0; v_length pls_integer := 0; begin SELECT DISTINCT OWNER INTO v_schema FROM all_objects@ENTORA where object_name = upper(p_name); -- get DDL v_clob := dbms_metadata.get_ddl(p_type, upper(p_name), upper(v_schema)); -- get CLOB length v_fullength := dbms_lob.GetLength(v_clob); for nIndex in 1..ceil(v_fullength / 32767) loop v_offset := v_length + 1; v_length := least(v_fullength - (nIndex - 1) * 32767, 32767); dbms_lob.read(v_clob, v_length, v_offset, v_longstrings(nIndex)); -- Remove table’s owner from DDL string: v_longstrings(nIndex) := replace( v_longstrings(nIndex), c_doublequote || user || c_doublequote || '.', '' ); -- Remove the following from DDL string: -- 1) "new line" characters (chr(10)) -- 2) leading and trailing spaces v_longstrings(nIndex) := ltrim(rtrim(replace(v_longstrings(nIndex), chr(10), ''))); end loop; -- close CLOB if (dbms_lob.isOpen(v_clob) > 0) then dbms_lob.close(v_clob); end if; return v_longstrings(1); end GetDDL; so as to remove the schema prefix that usually comes with metadata. I get a null value whenever I run this function over the database link with the following queries. select getddl( 'TABLE', 'TABLE1') from user_tables@ENTORA where table_name = 'TABLE1'; select getddl( 'TABLE', 'TABLE1') from dual@ENTORA; t_string is varchar2(30) t_longstring is varchar2(32767) and type t_ArrayLongString is table of t_longstring I would really appreciate it if any one could help. Many thanks.


  • Check my anagram code from a job interview in the past.

    - by Michael Dorgan
    Had the following as an interview question a while ago and choked so badly on basic syntax that I failed to advance (once the adrenaline kicks in, coding goes out the window). Given a list of strings, return a list of sets of strings that are anagrams within the input set, i.e. "dog", "god", "foo" should return {"dog", "god"}. Afterward, I created the code on my own as a sanity check, and it's been around now for a bit. I'd welcome input on it to see if I missed anything or if I could have done it much more efficiently. Take it as a chance to improve myself and learn other techniques:

        void Anagram::doWork(list<string> input, list<set<string> > &output)
        {
            typedef list<pair<string, string> > SortType;
            SortType sortedInput;

            // sort each string and pair it with the original
            for(list<string>::iterator i = input.begin(); i != input.end(); ++i)
            {
                string tempString(*i);
                std::sort(tempString.begin(), tempString.end());
                sortedInput.push_back(make_pair(*i, tempString));
            }

            // Now step through the new sorted list
            for(SortType::iterator i = sortedInput.begin(); i != sortedInput.end();)
            {
                set<string> newSet;

                // Assume (hope) we have a match and pre-add the first.
                newSet.insert(i->first);

                // Set the secondary iterator one past the outside to prevent
                // matching the original
                SortType::iterator j = i;
                ++j;
                while(j != sortedInput.end())
                {
                    if(i->second == j->second)
                    {
                        // If the string matches, add it to the set and remove it
                        // so that future searches need not worry about it
                        newSet.insert(j->first);
                        j = sortedInput.erase(j);
                    }
                    else
                    {
                        // else, next element
                        ++j;
                    }
                }

                // If size is bigger than our original push, we have a match - save it to the output
                if(newSet.size() > 1)
                {
                    output.push_back(newSet);
                }

                // erase this element and update the iterator
                i = sortedInput.erase(i);
            }
        }
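
    An observation added here, not part of the original post: the erase-based inner loop makes this quadratic in the number of words. Bucketing each word under its sorted characters in a hash map gives the same groups in a single pass. A sketch in Java, the language used for examples added to this digest:

        import java.util.*;

        public class AnagramGroups {
            // Group words that are anagrams of each other; the bucket key is the word's sorted characters.
            static List<Set<String>> group(List<String> words) {
                Map<String, Set<String>> buckets = new LinkedHashMap<>();
                for (String w : words) {
                    char[] c = w.toCharArray();
                    Arrays.sort(c);
                    buckets.computeIfAbsent(new String(c), k -> new LinkedHashSet<>()).add(w);
                }
                List<Set<String>> result = new ArrayList<>();
                for (Set<String> s : buckets.values()) {
                    if (s.size() > 1) result.add(s);   // only keep buckets with an actual anagram pair
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(group(Arrays.asList("dog", "god", "foo")));  // [[dog, god]]
            }
        }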


  • How to parse an XML string in TDI

    - by ongle
    I am new to TDI. I have a TDI assembly line that calls a web service (ibmdi.InvokeSoapWS) which returns the result as a string in the work attribute 'xmlString'. I then have an AttributeMap that attempts to parse the xml and extract a value (the node it seeks is a few nodes deep).

        var parser = system.getParser('ibmdi.SOAP');
        var xmlString = work.getString('xmlString');
        var entity = parser.parseRequest(xmlString);
        task.dump(entity);

    The trouble is, the parsed object does not contain an accurate representation of the XML. It contains only two attributes, the first is correctly the first node following the soap body (e.g., ns0:SomeNodeReply), the second being a node inside the first (e.g., ns0:DetailCount). And as far as I can determine, both attributes are just strings so I cannot recurse into the object graph. Below is a sample soap reply:

        <?xml version="1.0" encoding="utf-8"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Body>
            <ns0:SomeNodeReply xmlns:ns0="http://xmlns.example.com/unique/default/namespace/1136581686664">
              <ns0:Status>
                <ns0:StatusCD>000</ns0:StatusCD>
                <ns0:StatusDesc />
              </ns0:Status>
              <ns0:DetailCount>1</ns0:DetailCount>
              <ns0:SomeDetail>
                <ns0:CodeA>Foo</ns0:CodeA>
                <ns0:CodeB>Bar</ns0:CodeB>
              </ns0:SomeDetail>
            </ns0:SomeNodeReply>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    And below is a sample dump of the parsed string:

        19:03:23 CTGDIS003I *** Start dumping Entry
        19:03:23   Operation: generic
        19:03:23   Entry attributes:
        19:03:23     SOAP_CALL (replace): 'ns0:SomeNodeReply'
        19:03:23     ns0:DetailCount(replace): '1'
        19:03:23 CTGDIS004I *** Finished dumping Entry

    All I really need to do is be able to parse out a value that may or may not be there, depending on the value of another node (e.g., DetailCount == 1, get CodeA otherwise return empty string). I am open to changing anything about how this works if I can extract the data into the work Entry.
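
    A note added here, not part of the original question: once the reply is handed to a namespace-aware XML parser instead of the SOAP parser's two-attribute view, the conditional lookup is a couple of XPath calls. A sketch of that logic in Java, run against a trimmed copy of the sample reply above:

        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;

        public class SoapExtract {
            public static void main(String[] args) throws Exception {
                String reply =
                    "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                  + "<SOAP-ENV:Body>"
                  + "<ns0:SomeNodeReply xmlns:ns0=\"http://xmlns.example.com/unique/default/namespace/1136581686664\">"
                  + "<ns0:DetailCount>1</ns0:DetailCount>"
                  + "<ns0:SomeDetail><ns0:CodeA>Foo</ns0:CodeA><ns0:CodeB>Bar</ns0:CodeB></ns0:SomeDetail>"
                  + "</ns0:SomeNodeReply></SOAP-ENV:Body></SOAP-ENV:Envelope>";

                DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
                f.setNamespaceAware(true);
                Document doc = f.newDocumentBuilder()
                        .parse(new ByteArrayInputStream(reply.getBytes(StandardCharsets.UTF_8)));

                XPath xp = XPathFactory.newInstance().newXPath();
                // local-name() avoids having to register the ns0 prefix with the XPath engine.
                String detailCount = xp.evaluate("//*[local-name()='DetailCount']", doc);
                String codeA = "1".equals(detailCount)
                        ? xp.evaluate("//*[local-name()='CodeA']", doc)
                        : "";
                System.out.println(codeA);   // Foo
            }
        }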


  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not everything there is to know, but the basics and maybe some good practices), so I've decided to reprogram an old program of mine called ImageSyncer. ImageSyncer is a really simple program: all it does is scan through a folder and find all files ending with .jpg, then it calculates the new position of the files based on the date they were taken (parsing of EXIF data, or whatever it's called). After a location has been generated, the program checks for any existing file at that location, and if one exists it looks at the last write time of both the file to copy and the file "in its way". If those are equal the file is skipped. If not, an MD5 checksum of both files is created and matched. If there is no match, the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is populated into a queue of a struct type that contains two strings, the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words there are 4 operations:

    1. Scan directory for jpegs
    2. Parse files for EXIF data and generate copy location
    3. Check for file existence and, if needed, generate new path
    4. Copy files

    And so I want to rewrite this program to make it parallel and be able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models, though neither of them might be any good at all. The first one is to parallelize the 4 steps of the old program, so that when step one is to be executed it's done on several threads, and when the whole of step 1 is finished, step 2 begins. The other one (which I find more interesting, because I have no idea how to do it) is to create a sort of worker and consumer model, so when a thread is finished with step 1 another one takes over and performs step 2 on that object (or something like that). But as said, I don't know if either of these is a good solution. Also, I don't know much about parallel programming at all. I know how to make a thread, and how to make it perform a function taking an object as its only parameter, and I've also used the BackgroundWorker class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.
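
    An addition, not part of the original post: the second model described above is a producer-consumer pipeline, where each stage pulls work items from a thread-safe queue and hands results to the next stage. In newer .NET the usual building blocks for this are the Task Parallel Library and BlockingCollection; the sketch below shows the shape of the idea in Java (the language used for examples added to this digest) with a BlockingQueue, toy file names, and arbitrary thread counts:

        import java.util.concurrent.*;

        public class PipelineSketch {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> scanned = new LinkedBlockingQueue<>();
                final String DONE = "";                      // sentinel marking the end of the stream
                int consumers = 3;
                ExecutorService pool = Executors.newFixedThreadPool(4);

                // Stage 1 (producer): in the real program this is the directory scan.
                pool.submit(() -> {
                    for (String f : new String[]{"a.jpg", "b.jpg", "c.jpg"}) scanned.add(f);
                    for (int i = 0; i < consumers; i++) scanned.add(DONE);   // one sentinel per consumer
                });

                // Stage 2 (consumers): parse EXIF, work out the target path, copy.
                for (int i = 0; i < consumers; i++) {
                    pool.submit(() -> {
                        try {
                            String f;
                            while (!(f = scanned.take()).equals(DONE)) {
                                System.out.println(Thread.currentThread().getName() + " handles " + f);
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }

                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.SECONDS);
            }
        }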


  • Searching in Ruby on Rails - How do I search on each word entered and not the exact string?

    - by bgadoci
    I have built a blog application with Ruby on Rails and I am trying to implement a search feature. The blog application allows users to tag posts. The tags are created in their own table and belong_to :post. When a tag is created, so is a record in the tag table where the name of the tag is tag_name, associated by post_id. Tags are strings. I am trying to allow a user to search for any word in tag_name, in any order. Here is what I mean. Let's say a particular post has a tag that is 'ruby code controller'. In my current search feature, that tag will be found if the user searches for 'ruby', 'ruby code', or 'ruby code controller'. It will not be found if the user types in 'ruby controller'. Essentially what I am saying is that I would like each word entered in the search to be searched for, not necessarily the exact string that is entered into the search. I have been experimenting with providing multiple text fields to allow the user to type in multiple words, and have also been playing around with the code below, but can't seem to accomplish the above. I am new to Ruby and Rails, so sorry if this is an obvious question; before installing a gem or plugin I thought I would check whether there is a simple fix. Here is my code:

    View: /views/tags/index.html.erb

        <% form_tag tags_path, :method => 'get' do %>
          <p>
            <%= text_field_tag :search, params[:search], :class => "textfield-search" %>
            <%= submit_tag "Search", :name => nil, :class => "search-button" %>
          </p>
        <% end %>

    TagsController

        def index
          @tags = Tag.search(params[:search]).paginate :page => params[:page], :per_page => 5
          @tagsearch = Tag.search(params[:search])
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 100)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @tags }
          end
        end

    Tag Model

        class Tag < ActiveRecord::Base
          belongs_to :post
          validates_length_of :tag_name, :maximum => 42
          validates_presence_of :tag_name

          def self.search(search)
            if search
              find(:all, :order => "created_at DESC", :conditions => ['tag_name LIKE ?', "%#{search}%"])
            else
              find(:all, :order => "created_at DESC")
            end
          end
        end
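
    A note added here, not in the original question: the behaviour being described amounts to one LIKE condition per word, ANDed together, which means splitting params[:search] on whitespace and building the conditions array from the pieces. The shape of that string handling, sketched in Java (the language used for examples added to this digest) with a placeholder search phrase:

        import java.util.ArrayList;
        import java.util.List;

        public class WordSearchSql {
            public static void main(String[] args) {
                String search = "ruby controller";           // hypothetical user input
                List<String> clauses = new ArrayList<>();
                List<String> params = new ArrayList<>();
                for (String word : search.trim().split("\\s+")) {
                    clauses.add("tag_name LIKE ?");           // one condition per word...
                    params.add("%" + word + "%");             // ...with its own bound parameter
                }
                String where = String.join(" AND ", clauses);
                System.out.println(where);   // tag_name LIKE ? AND tag_name LIKE ?
                System.out.println(params);  // [%ruby%, %controller%]
            }
        }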


  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, We are currently evaluating options for migrating from hand-written persistence layer to ORM. We have a bunch of legacy persistent objects (~200), that implement simple interface like this: interface JDBC { public long getId(); public void setId(long id); public void retrieve(); public void setDataSource(DataSource ds); } When retrieve() is called, object populates itself by issuing handwritten SQL queries to the connection provided using the ID it received in the setter (this usually is the only parameter to the query). It manages its statements, result sets, etc itself. Some of the objects have special flavors of retrive() method, like retrieveByName(), in this case a different SQL is issued. Queries could be quite complex, we often join several tables to populate the sets representing relations to other objects, sometimes join queries are issued on-demand in the specific getter (lazy loading). So basically, we have implemented most of the ORM's functionality manually. The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests has shown that none of mainstream ORMs were that fast as hand-written SQL. The problems we are facing now that make us think of ORM are: Most of the paths in this code are well-tested and are stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect We are currently squeezing some additional performance by adding caching to our persistence layer and it's a huge pain to maintain the cached objects manually in this setup Support of this code when DB schema changes is a big problem. I am looking for an advice on what could be the best alternative for us. As far as I know, ORMs has advanced in last 5 years, so it might be that now there's one that offers an acceptable performance. As I see this issue, we need to address those points: Find some way to reuse at least some of the written SQL to express mappings Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42) as they are very sensitive to schema changes) Add a non-intrusive caching layer Keep the performance figures. Is there any ORM framework you could recommend with regards to that?


  • ASP.NET MVC: How can I explain an invalid type violation to an end-user with Html.ValidationSummary?

    - by Terminal Frost
    Serious n00b warning here; please take mercy! So I finished the Nerd Dinner MVC Tutorial and I'm now in the process of converting a VB.NET application to ASP.NET MVC using the Nerd Dinner program as a sort of rough template. I am using the "IsValid / GetRuleViolations()" pattern to identify invalid user input or values that violate business rules. I am using LINQ to SQL and am taking advantage of the "OnValidate()" hook that allows me to run the validation and throw an application exception upon trying to save changes to the database via the CustomerRepository class. Anyway, everything works well, except that by the time the form values reach my validation method invalid types have already been converted to a default or existing value. (I have a "StreetNumber" property that is an integer, though I imagine this would be a problem for DateTime or any other non-strings as well.) Now, I am guessing that the UpdateModel() method throws an exception and then alters the value because the Html.ValidationMessage is displayed next to the StreetNumber field but my validation method never sees the original input. There are two problems with this: While the Html.ValidationMessage does signal that something is wrong, there is no corresponding entry in the Html.ValidationSummary. If I could even get the exception message to show up there indicating an invalid cast or something that would be better than nothing. My validation method which resides in my Customer partial class never sees the original user input so I do not know if the problem is a missing entry or an invalid type. I can't figure out how I can keep my validation logic nice and neat in one place and still get access to the form values. I could of course write some logic in the View that processes the user input, however that seems like the exact opposite of what I should be doing with MVC. Do I need a new validation pattern or is there some way to pass the original form values to my model class for processing? CustomerController Code // POST: /Customers/Edit/[id] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection formValues) { Customer customer = customerRepository.GetCustomer(id); try { UpdateModel(customer); customerRepository.Save(); return RedirectToAction("Details", new { id = customer.AccountID }); } catch { foreach (var issue in customer.GetRuleViolations()) ModelState.AddModelError(issue.PropertyName, issue.ErrorMessage); } return View(customer); }


  • Google Code + SVN or GitHub + Git

    - by Nazgulled
    Let me start by telling you that I never used anything besides SVN and I'm also a Windows user. I have a couple of simple projects that are open-source, others are on there way when I'm happy enough to release their source code but either way, I was thinking of using Google Code and SVN to share the source code of my projects instead of providing a link to the source on my website. This as always been a pain cause I had to update the binaries and the code every time I released a new version. This would also help me out to have a backup of my code some where instead of just my local machine (I used to have a local Subversion server running). What I want from a service like this is very simple... I just want a place to store my source code that people can download if they want, allows me to control revisions and provide a simple and easy issue system so people can submit bugs and stuff like that. I guess both of them have this. But I don't want to host any binaries in their websites, I want this to be hosted on my website so I can control download statistics with my own scripts, I also don't have the need for wiki pages as I prefer to have all the documentation in my own website. Does anyone of this services provide a way to "disable" features like wiki and downloads and don't show them at all for my project(s)? Now, I'm sure there are lots of pros and cons about using Google Code with SVN and GitHub with Git (of course) but here's what it's important for me on each one and why I like them: Google Code: As with any Google page, the complexity is almost non-existent Everyone (or almost) as a Google account and this is nice if people want to report problems using the issues system GitHub: May (or may not) be a little more complex (not a problem for me though) than Google's pages but... ...has a much prettier interface than Google's service It needs people to be registered on GitHub to post about issues I like the fact that with Git, you have your own revisions locally (can I use TortoiseGit for this or?) Basically that's it, not much I know... What other, most common, pros and cons can you tell me about each site/software? Keep in mind that my projects are simple, I'm probably the only one who will ever develop these projects on these repositories (or maybe not, for now I will)


  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from the WAV file. I would like to pass in these values into a library and get the frequency of the music played in the WAV file. For now, I will have 1 frequency in the WAV file and I would like to find a library that is compatible with Android. I understand that I need to use FFT to get the frequency domain. Is there any good libraries for that? I found that [KissFFT][1] is quite popular but I am not very sure how compatible it is on Android. Is there an easier and good library that can perform the task I want? EDIT: I tried to use JTransforms to get the FFT of the WAV file but always failed at getting the correct frequency of the file. Currently, the WAV file contains sine curve of 440Hz, music note A4. However, I got the result as 441. Then I tried to get the frequency of G4, I got the result as 882Hz which is incorrect. The frequency of G4 is supposed to be 783Hz. Could it be due to not enough samples? If yes, how much samples should I take? //DFT DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames); double max_fftval = -1; int max_i = -1; double[] fftData = new double[numOfFrames * 2]; for (int i = 0; i < numOfFrames; i++) { // copying audio data to the fft data buffer, imaginary part is 0 fftData[2 * i] = buffer[i]; fftData[2 * i + 1] = 0; } fft.complexForward(fftData); for (int i = 0; i < fftData.length; i += 2) { // complex numbers -> vectors, so we compute the length of the vector, which is sqrt(realpart^2+imaginarypart^2) double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1])); //fd.append(Double.toString(vlen)); // fd.append(","); if (max_fftval < vlen) { // if this length is bigger than our stored biggest length max_fftval = vlen; max_i = i; } } //double dominantFreq = ((double)max_i / fftData.length) * sampleRate; double dominantFreq = (max_i/2.0) * sampleRate / numOfFrames; fd.append(Double.toString(dominantFreq)); Can someone help me out? EDIT2: I manage to fix the problem mentioned above by increasing the number of samples to 100000, however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use Harmonic Product Frequency or Autocorrelation algorithms?
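
    An aside that is not part of the original post, but explains the 440-vs-441 result: a DFT only resolves frequencies to multiples of sampleRate / N, so the reported peak snaps to the nearest bin; more samples (or zero-padding, or interpolating around the peak) tightens that grid, while picking a harmonic instead of the fundamental is a separate issue, which is exactly what harmonic product spectrum or autocorrelation are for. The bin-to-frequency step, sketched in Java with made-up numbers:

        public class DominantFrequency {
            // Given per-bin magnitudes from an FFT of n real samples, return the peak frequency in Hz.
            static double dominantHz(double[] magnitude, double sampleRate, int n) {
                int best = 1;                          // skip bin 0, which is just the DC offset
                for (int i = 1; i < n / 2; i++) {      // for real input only the first n/2 bins are meaningful
                    if (magnitude[i] > magnitude[best]) best = i;
                }
                return best * sampleRate / n;          // frequency resolution is sampleRate / n
            }

            public static void main(String[] args) {
                int n = 4096;
                double sampleRate = 44100;
                double[] magnitude = new double[n];
                magnitude[41] = 1.0;                   // pretend the peak landed in bin 41
                // Bin 41 of a 4096-point FFT at 44.1 kHz maps to about 441.4 Hz,
                // which is how a clean 440 Hz tone can get reported as "441".
                System.out.println(dominantHz(magnitude, sampleRate, n));
            }
        }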


  • output with "Private`" Content in Mathematica Package

    - by madalina
    Hello everyone, I am trying to solve the following implementation problem in Mathematica 7.0 for some days now and I do not understand exactly what is happening so I hope someone can give me some hints. I have 3 functions that I implemented in Mathematica in a source file with extension *.nb. They are working okay to all the examples. Now I want to put these functions into 3 different packages. So I created three different packages with extension .*m in which I put all the desired Mathematica function. An example in the "stereographic.m" package which contain the code: BeginPackage["stereographic`"] stereographic::usage="The package stereographic...." formEqs::usage="The function formEqs[complexBivPolyEqn..." makePoly::usage="The function makePoly[algebraicEqn] ..." getFixPolys::usage="The function..." milnorFibration::usage="The function..." Begin["Private`"] Share[]; formEqs[complex_,{m_,n_}]:=Block[{complexnew,complexnew1, realeq, imageq, expreal, expimag, polyrealF, polyimagF,s,t,u,v,a,b,c,epsilon,x,y,z}, complexnew:=complex/.{m->s+I*t,n->u+I*v}; complexnew1:=complexnew/.{s->(2 a epsilon)/(1+a^2+b^2+c^2),t->(2 b epsilon)/(1+a^2+b^2+c^2),u->(2 c epsilon)/(1+a^2+b^2+c^2),v->(- epsilon+a^2 epsilon+b^2 epsilon+c^2 epsilon)/(1+a^2+b^2+c^2)}; realeq:=ComplexExpand[Re[complexnew1]]; imageq:=ComplexExpand[Im[complexnew1]]; expreal:=makePoly[realeq]; expimag:=makePoly[imageq]; polyrealF:=expreal/.{a->x,b->y,c->z}; polyimagF:=expimag/.{a->x,b->y,c->z}; {polyrealF,polyimagF} ] End[] EndPackage[] Now to test the function I load the package Needs["stereographic`"] everything is okay. But when I test the function for example with formEqs[x^2-y^2,{x,y}] I get the following ouput: {Private`epsilon^2 + 2 Private`x^2 Private`epsilon^2 + Private`x^4 Private`epsilon^2 - 6 Private`y^2 Private`epsilon^2 + 2 Private`x^2 Private`y^2 Private`epsilon^2 + Private`y^4 Private`epsilon^2 - 6 Private`z^2 Private`epsilon^2 + 2 Private`x^2 Private`z^2 Private`epsilon^2 + 2 Private`y^2 Private`z^2 Private`epsilon^2 + Private`z^4 Private`epsilon^2, 8 Private`x Private`y Private`epsilon^2 + 4 Private`z Private`epsilon^2 - 4 Private`x^2 Private`z Private`epsilon^2 - 4 Private`y^2 Private`z Private`epsilon^2 - 4 Private`z^3 Private`epsilon^2} Of course I do not understand why Private` appears in front of any local variable which I returned in the final result. I would want not to have this Private` in the computed output. Any idea or better explanations which could indicate me why this happens? Thank you very much for your help. Best wishes, madalina


  • using LoadControl with object initializer to create properties

    - by lloydphillips
    In the past I've used UserControls to create email templates which I can fill properties on and then use LoadControl and then RenderControl to get the html for which to use for the body text of my email. This was within asp.net webforms. I'm in the throws of building an mvc website and wanted to do something similar. I've actually considered putting this functionality in a seperate class library and am looking into how I can do this so that in my web layer I can just call EmailTemplate.SubscriptionEmail() which will then generate the html from my template with properties in relevant places (obviously there needs to be parameters for email address etc in there). I wanted to create a single Render control method for which I can pass a string to the path of the UserControl which is my template. I've come across this on the web that kind of suits my needs: public static string RenderUserControl(string path, string propertyName, object propertyValue) { Page pageHolder = new Page(); UserControl viewControl = (UserControl)pageHolder.LoadControl(path); if (propertyValue != null) { Type viewControlType = viewControl.GetType(); PropertyInfo property = viewControlType.GetProperty(propertyName); if (property != null) property.SetValue(viewControl, propertyValue, null); else { throw new Exception(string.Format( "UserControl: {0} does not have a public {1} property.", path, propertyName)); } } pageHolder.Controls.Add(viewControl); StringWriter output = new StringWriter(); HttpContext.Current.Server.Execute(pageHolder, output, false); return output.ToString(); } My issue is that my UserControl(s) may have multiple and differing properties. So SubscribeEmail may require FirstName and EmailAddress where another email template UserControl (lets call it DummyEmail) would require FirstName, EmailAddress and DateOfBirth. The method above only appears to carry one parameter for propertyName and propertyValue. I considered an array of strings that I could put the varying properties into but then I thought it'd be cool to have an object intialiser so I could call the method like this: RenderUserControl("EmailTemplates/SubscribeEmail.ascs", new object() { Firstname="Lloyd", Email="[email protected]" }) Does that make sense? I was just wondering if this is at all possible in the first place and how I'd implement it? I'm not sure if it would be possible to map the properties set on 'object' to properties on the loaded user control and if it is possible where to start in doing this? Has anyone done something like this before? Can anyone help? Lloyd
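
    An addition, not from the original post: the "object initializer" call style boils down to reflecting over a bag of name/value pairs and invoking the matching property setter on the freshly loaded control; in C# that is usually an anonymous object walked with GetType().GetProperties(). The dispatch itself, sketched in Java with hypothetical property names:

        import java.lang.reflect.Method;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class PropertyBinder {
            // Call the setter for every entry in the map: "Firstname" -> setFirstname(value)
            static void apply(Object target, Map<String, Object> props) throws Exception {
                for (Map.Entry<String, Object> e : props.entrySet()) {
                    Method setter = target.getClass()
                            .getMethod("set" + e.getKey(), e.getValue().getClass());
                    setter.invoke(target, e.getValue());
                }
            }

            public static class Email {            // hypothetical stand-in for the loaded user control
                private String firstname, email;
                public void setFirstname(String v) { firstname = v; }
                public void setEmail(String v) { email = v; }
                @Override public String toString() { return firstname + " <" + email + ">"; }
            }

            public static void main(String[] args) throws Exception {
                Email template = new Email();
                Map<String, Object> props = new LinkedHashMap<>();
                props.put("Firstname", "Lloyd");
                props.put("Email", "lloyd@example.com");
                apply(template, props);
                System.out.println(template);
            }
        }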


  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. EG context = { a: 1, b: 2 } import types test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests') test_context_module.__dict__.update(context) import sys sys.modules['TestContext'] = test_context_module My immediate goal in this regard is to be able to provide a context for timing test execution: import timeit timeit.Timer('a + b', 'from TestContext import *') It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this though, since a) it has other potential applications; and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances. EDITS/REVELATIONS/PHOOEYS/EUREKAE: I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the testit module. In other words, the globals dictionary used when executing that code is that of main, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question. I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon. The similar-but-subtly-different case of dynamically loading a module from a file that is not in python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However this doesn't really answer my question, because really, what if you're running python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this. But the question, essentially, at this point is how to set the global (ie module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?


  • Releasing the keyboard stops shake events. Why?

    - by Moshe
    1) How do I make a UITextField resign the keyboard and hide it? The keyboard is in a dynamically created subview whose superview looks for shake events. Resigning first responder seems to break the shake event handler. 2) how do you make the view holding the keyboard transparent, like see through glass? I have seen this done before. This part has been taken care of thanks guys. As always, code samples are appreciated. I've added my own to help explain the problem. EDIT: Basically, - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event; gets called in my main view controller to handle shaking. When a user taps on the "edit" icon (a pen, in the bottom of the screen - not the traditional UINavigationBar edit button), the main view adds a subview to itself and animates it on to the screen using a custom animation. This subview contains a UINavigationController which holds a UITableView. The UITableView, when a cell is tapped on, loads a subview into itself. This second subview is the culprit. For some reason, a UITextField in this second subview is causing problems. When a user taps on the view, the main view will not respond to shakes unless the UITextField is active (in editing mode?). Additional info: My Motion Event Handler: - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event { NSLog(@"%@", [event description]); SystemSoundID SoundID; NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"shake" ofType:@"aif"]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID); AudioServicesPlayAlertSound(SoundID); [self genRandom:TRUE]; } The genRandom: Method: /* Generate random label and apply it */ -(void)genRandom:(BOOL)deviceWasShaken{ if(deviceWasShaken == TRUE){ decisionText.text = [NSString stringWithFormat: (@"%@", [shakeReplies objectAtIndex:(arc4random() % [shakeReplies count])])]; }else{ SystemSoundID SoundID; NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"string" ofType:@"aif"]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID); AudioServicesPlayAlertSound(SoundID); decisionText.text = [NSString stringWithFormat: (@"%@", [pokeReplies objectAtIndex:(arc4random() % [pokeReplies count])])]; } } shakeReplies and pokeReplies are both NSArrays of strings. One is used for when a certain part of the screen is poked and one is for when the device is shaken. The app will randomly choose a string from the NSArray and display onscreen. For those of you who work graphically, here is a diagram of the view hierarchy: Root View -> UINavigationController -> UITableView -> Edit View -> Problem UITextfield


  • Perl - Calling subclass constructor from superclass (OO)

    - by Emmel
    This may turn out to be an embarrassingly stupid question, but better than potentially creating embarrassingly stupid code. :-) This is an OO design question, really. Let's say I have an object class 'Foos' that represents a set of dynamic configuration elements, which are obtained by querying a command on disk, 'mycrazyfoos -getconfig'. Let's say that there are two categories of behavior that I want 'Foos' objects to have: Existing ones: one is, query ones that exist in the command output I just mentioned (/usr/bin/mycrazyfoos -getconfig`. Make modifications to existing ones via shelling out commands. Create new ones that don't exist; new 'crazyfoos', using a complex set of /usr/bin/mycrazyfoos commands and parameters. Here I'm not really just querying, but actually running a bunch of system() commands. Affecting changes. Here's my class structure: Foos.pm package Foos, which has a new($hashref-{name = 'myfooname',) constructor that takes a 'crazyfoo NAME' and then queries the existence of that NAME to see if it already exists (by shelling out and running the mycrazyfoos command above). If that crazyfoo already exists, return a Foos::Existing object. Any changes to this object requires shelling out, running commands and getting confirmation that everything ran okay. If this is the way to go, then the new() constructor needs to have a test to see which subclass constructor to use (if that even makes sense in this context). Here are the subclasses: Foos/Existing.pm As mentioned above, this is for when a Foos object already exists. Foos/Pending.pm This is an object that will be created if, in the above, the 'crazyfoo NAME' doesn't actually exist. In this case, the new() constructor above will be checked for additional parameters, and it will go ahead and, when called using -create() shell out using system() and create a new object... possibly returning an 'Existing' one... OR As I type this out, I am realizing it is perhaps it's better to have a single: (an alternative arrangement) Foos class, that has a -new() that takes just a name -create() that takes additional creation parameters -delete(), -change() and other params that affect ones that exist; that will have to just be checked dynamically. So here we are, two main directions to go with this. I'm curious which would be the more intelligent way to go.
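
    A comment added here, not part of the original question: both layouts are variants of the factory method pattern; whether it lives in new() or in a separate lookup, one entry point probes the external command and decides which concrete flavour to hand back, and the caller never names the subclass. Roughly, with made-up names, sketched in Java (the language used for examples in this digest):

        public abstract class Foo {
            protected final String name;
            protected Foo(String name) { this.name = name; }

            // Factory: decide which concrete class to return based on what already exists on disk.
            public static Foo lookup(String name) {
                return exists(name) ? new ExistingFoo(name) : new PendingFoo(name);
            }

            // Stand-in for shelling out to `mycrazyfoos -getconfig` and checking the output.
            private static boolean exists(String name) { return name.equals("already-there"); }

            static class ExistingFoo extends Foo {
                ExistingFoo(String name) { super(name); }
                void change() { /* shell out, modify the existing crazyfoo */ }
            }

            static class PendingFoo extends Foo {
                PendingFoo(String name) { super(name); }
                void create() { /* run the creation commands, then treat it as existing */ }
            }

            public static void main(String[] args) {
                System.out.println(Foo.lookup("already-there").getClass().getSimpleName()); // ExistingFoo
                System.out.println(Foo.lookup("new-one").getClass().getSimpleName());       // PendingFoo
            }
        }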


  • Stepping into Ruby Meta-Programming: Generating proxy methods for multiple internal methods

    - by mstksg
    Hi all; I've multiply heard Ruby touted for its super spectacular meta-programming capabilities, and I was wondering if anyone could help me get started with this problem. I have a class that works as an "archive" of sorts, with internal methods that process and output data based on an input. However, the items in the archive in the class itself are represented and processed with integers, for performance purposes. The actual items outside of the archive are known by their string representation, which is simply number_representation.to_s(36). Because of this, I have hooked up each internal method with a "proxy method" that converts the input into the integer form that the archive recognizes, runs the internal method, and converts the output (either a single other item, or a collection of them) back into strings. The naming convention is this: internal methods are represented by _method_name; their corresponding proxy method is represented by method_name, with no leading underscore. For example: class Archive ## PROXY METHODS ## ## input: string representation of id's ## output: string representation of id's def do_something_with id result = _do_something_with id.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_pair id_1,id_2 result = _do_something_with_pair id_1.to_i(36), id_2.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_these ids result = _do_something_with_these ids.map { |n| n.to_i(36) } return nil if result == nil return result.to_s(36) end def get_many_from id result = _get_many_from id return nil if result == nil # no sparse arrays returned return result.map { |n| n.to_s(36) } end ## INTERNAL METHODS ## ## input: integer representation of id's ## output: integer representation of id's def _do_something_with id # does something with one integer-represented id, # returning an id represented as an integer end def do_something_with_pair id_1,id_2 # does something with two integer-represented id's, # returning an id represented as an integer end def _do_something_with_these ids # does something with multiple integer ids, # returning an id represented as an integer end def _get_many_from id # does something with one integer-represented id, # returns a collection of id's represented as integers end end There are a couple of reasons why I can't just convert them if id.class == String at the beginning of the internal methods: These internal methods are somewhat computationally-intensive recursive functions, and I don't want the overhead of checking multiple times at every step There is no way, without adding an extra parameter, to tell whether or not to re-convert at the end I want to think of this as an exercise in understanding ruby meta-programming Does anyone have any ideas? edit The solution I'd like would preferably be able to take an array of method names @@PROXY_METHODS = [:do_something_with, :do_something_with_pair, :do_something_with_these, :get_many_from] iterate through them, and in each iteration, put out the proxy method. I'm not sure what would be done with the arguments, but is there a way to test for arguments of a method? If not, then simple duck typing/analogous concept would do as well.
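
    For comparison, and not part of the original post: the usual Ruby answer is a loop over the method-name list that calls define_method to generate each string-facing wrapper; the closest Java analogue is a dynamic proxy, where one invocation handler does the base-36 conversion for every proxied method instead of one hand-written wrapper per method. A sketch with a toy internal implementation and hypothetical names:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Proxy;

        public class ArchiveProxyDemo {
            interface Archive { String doSomethingWith(String id); }          // public API, string ids
            interface InternalArchive { int doSomethingWith(int id); }        // internal API, integer ids

            public static void main(String[] args) {
                InternalArchive internal = id -> id + 1;   // stand-in for the real integer-based method

                // One handler converts arguments and results for every method on the proxy.
                InvocationHandler handler = (proxy, method, methodArgs) -> {
                    int rawId = Integer.parseInt((String) methodArgs[0], 36);
                    Object result = InternalArchive.class
                            .getMethod(method.getName(), int.class)
                            .invoke(internal, rawId);
                    return Integer.toString((Integer) result, 36);
                };

                Archive archive = (Archive) Proxy.newProxyInstance(
                        Archive.class.getClassLoader(), new Class<?>[]{Archive.class}, handler);

                System.out.println(archive.doSomethingWith("zz"));  // "zz" -> 1295 -> 1296 -> "100"
            }
        }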


  • Stuck at being unable to print a substring of more than 4679 characters

    - by Newcoder
    I have a program that does string manipulation on very large strings (around 100K characters). The first step in my program is to clean up the input string so that it only contains certain characters. Here is my method for this cleanup:

        public static String analyzeString (String input) {
            String output = null;
            output = input.replaceAll("[-+.^:,]","");
            output = output.replaceAll("(\\r|\\n)", "");
            output = output.toUpperCase();
            output = output.replaceAll("[^XYZ]", "");
            return output;
        }

    When I print my 'input' string of length 97498, it prints successfully. My output string after cleanup has length 94788. I can print the size using output.length(), but when I try to print the string itself in Eclipse, the output is empty, as I can see in the Eclipse output console. Since this is not my final program, I ignored this and proceeded to the next method, which does pattern matching on the cleaned-up string. Here is the code for the pattern matching:

        public static List<Integer> getIntervals(String input, String regex) {
            List<Integer> output = new ArrayList<Integer> ();
            // Do pattern matching
            Pattern p1 = Pattern.compile(regex);
            Matcher m1 = p1.matcher(input);
            // If match found
            while (m1.find()) {
                output.add(m1.start());
                output.add(m1.end());
            }
            return output;
        }

    Based on this program, I identify the start and end intervals of my pattern match as 12351 and 87314. I tried to print this match as output.substring(12351, 87314) and only get blank output. Numerous trial-and-error runs led to the conclusion that the biggest substring I can print is of length 4679. If I try 4680, I again get blank output. My confusion is this: if I was able to print the original string (length 97498), why can't I print the cleaned-up string (length 94788) or a substring longer than 4679 characters? Is it due to the regular expression implementation causing some memory issue that my system is not able to handle? I have 4GB of installed memory.
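
    An aside, not from the original post: a 95K-character String and a 75K-character slice of it are nowhere near any JVM limit, so the empty output points at the console rather than at the string or the regex engine. Dumping the slice to a file is a quick way to confirm that. A small sketch in Java, using the offsets quoted above on a synthetic 100K-character string:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class BigSubstringCheck {
            public static void main(String[] args) throws IOException {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 100_000; i++) sb.append("XYZ".charAt(i % 3));
                String cleaned = sb.toString();                    // stand-in for the cleaned-up string

                String slice = cleaned.substring(12351, 87314);    // the offsets from the question
                System.out.println(slice.length());                // 74963 -- the string itself is fine

                // If a console refuses to show it, a file will:
                Files.write(Paths.get("slice.txt"), slice.getBytes(StandardCharsets.UTF_8));
            }
        }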


  • StringBuffer won't read whole stream into a string (Java/Android)

    - by Levara
    Hi all! I'm making an Android program that retrieves the content of a webpage using HttpURLConnection. I'm new to both Java and Android. The problem: the Reader reads the whole page source, but in the last while iteration it doesn't append that last part to stringBuffer. Using the debugger I have determined that, in the last loop iteration, the string buff is created, but stringBuffer just doesn't append it. I need to parse the retrieved content. Is there any better way to handle the content for parsing than using strings? I've read on numerous other sites that string size in Java is limited only by available heap size. I've tried StringBuilder too. Does anyone know what the problem could be? By the way, feel free to suggest any improvements to the code. Thanks!

        URL u;
        try {
            u = new URL("http://feeds.timesonline.co.uk/c/32313/f/440134/index.rss");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            c.setRequestProperty("User-agent","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)");
            c.setRequestMethod("GET");
            c.setDoOutput(true);
            c.setReadTimeout(3000);
            c.connect();
            StringBuffer stringBuffer = new StringBuffer("");
            InputStream in = c.getInputStream();
            InputStreamReader inp = new InputStreamReader(in);
            BufferedReader reader = new BufferedReader(inp);
            char[] buffer = new char[3072];
            int len1 = 0;
            while ( (len1 = reader.read(buffer)) != -1 ) {
                String buff = new String(buffer,0,len1);
                stringBuffer.append(buff);
            }
            String stranica = new String(stringBuffer);
            c.disconnect();
            reader.close();
            inp.close();
            in.close();
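
    A note added here, not from the original post: the read loop above only stops when read() returns -1, so by itself it should not drop the tail of the page; the truncation usually turns out to be in how the result is printed or inspected afterwards. For comparison, here is the same fetch compressed into a try-with-resources sketch (Java 7+), which at least rules out resource-handling slips:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class FetchPage {
            static String fetch(String url) throws IOException {
                HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                c.setReadTimeout(3000);
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(c.getInputStream(), StandardCharsets.UTF_8))) {
                    StringBuilder out = new StringBuilder();
                    char[] buf = new char[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) out.append(buf, 0, n);   // stop only at end of stream
                    return out.toString();
                } finally {
                    c.disconnect();
                }
            }

            public static void main(String[] args) throws IOException {
                String page = fetch("http://feeds.timesonline.co.uk/c/32313/f/440134/index.rss");
                System.out.println(page.length());   // check the length rather than eyeballing console output
            }
        }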


  • MVC Entity Framework: Cannot open user default database. Login failed.

    - by Michael
    This type of stuff drives me nuts. I'm having trouble finding the exact issue that I'm having, maybe I just don't know the terminology. Anyway, I had a working website using MVC and Entity Framework, but then I coded an error in a partial view page (ascx). Then all of a sudden I started to get this message. Cannot open user default database. Login failed. Login failed for user 'NT AUTHORITY\SYSTEM' I've found plenty of suggestions about opening SQL Server Management Studio, Double Click on Security, Double Click on Logins, Double click on NT AUTHORITY\SYSTEM and then double click on User Mapping. In this view I'm suppose to check the box for my database so that this user is mapped to this login. However, since I created my database in Visio Studio 2008 as part of my solution, it doesn't show up to allow me to click on it. So what do I do now? What drives me nuts is that everything was working fine. I was using my computer name to access the website and everything was working fine until the coding error. I've fix the error but still getting the error. I should also mention that this error started yesterday too around the same time but later cleared itself up. If I use localhost to access the site, it works just fine. IIS7 configuration for my website: Application Pool = DefaultAppPool Physical Path Credentials Logon = ClearText With in connection strings. I do see the one for my database in this solution. Entry Type is local metadata=res://*/Models.DataModel.csdl|res://*/Models.DataModel.ssdl|res://*/Models.DataModel.msl; provider=System.Data.SqlClient; provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\FFBall.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True" And somewhere I remember changing the identity from Network Service to LocalSystem. Because when I first stared I was getting this same message, but I changed this value and it started working. I saw that suggested somewhere too but I do not recall. Wait I remember now, I believe in IIS7, under Application Pools, DefaultAppPool identity is set to LocalSystem. Additional things I've tried. Restart the computer Recycle the application pool. Antivirus isn't running. Any help would be appreciated. Thank you in advance.


  • XML pass values to timer, AS3

    - by VideoDnd
    My timer has three variables that I can trace to the output window, but don't know how to pass them to the timer. How to I pass the XML values to my timer? Purpose I want to test with an XML document, before I try connecting it to an XML socket. myXML <?xml version="1.0" encoding="utf-8"?> <SESSION> <TIMER TITLE="speed">100</TIMER> <COUNT TITLE="starting position">-77777</COUNT> <FCOUNT TITLE="ramp">1000</FCOUNT> </SESSION> myFlash //myTimer 'instance of mytext on stage' /* fields I want to change with XML */ //CHANGE TO 100 var timer:Timer = new Timer(10); //CHANGE TO -77777 var count:int = 0; //CHANGE TO 1000 var fcount:int = 0; timer.addEventListener(TimerEvent.TIMER, incrementCounter); timer.start(); function incrementCounter(event:TimerEvent) { count++; fcount=int(count*count/1000);//starts out slow... then speeds up mytext.text = formatCount(fcount); } function formatCount(i:int):String { var fraction:int = i % 100; var whole:int = i / 100; return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction); } //LOAD XML var myXML:XML; var myLoader:URLLoader = new URLLoader(); myLoader.load(new URLRequest("time.xml")); myLoader.addEventListener(Event.COMPLETE, processXML); //PARSE XML function processXML(e:Event):void { myXML = new XML(e.target.data); trace(myXML.ROGUE.*); trace(myXML); //TEXT var text:TextField = new TextField(); text.text = myXML.TIMER.*; text.textColor = 0xFF0000; addChild(text); } RESOURCES OReilly's ActionScript 3.0 Cookbook, Chapter 12 Strings, Chapter 20 XML


  • UCA + Natural Sorting

    - by Alix Axel
    I recently learnt that PHP already supports the Unicode Collation Algorithm via the intl extension: $array = array ( 'al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha', '????', 'img10.png', 'img12.png', 'img1.png', 'img2.png', ); if (extension_loaded('intl') === true) { collator_asort(collator_create('root'), $array); } Array ( [0] => al [2] => Alpha [4] => Álpha [5] => Àlpha [6] => Älpha [1] => be [3] => Beta [11] => img1.png [9] => img10.png [8] => img12.png [10] => img2.png [7] => ???? ) As you can see this seems to work perfectly, even with mixed case strings! The only drawback I've encountered so far is that there is no support for natural sorting and I'm wondering what would be the best way to work around that, so that I can merge the best of the two worlds. I've tried to specify the Collator::SORT_NUMERIC sort flag but the result is way messier: collator_asort(collator_create('root'), $array, Collator::SORT_NUMERIC); Array ( [8] => img12.png [7] => ???? [9] => img10.png [10] => img2.png [11] => img1.png [6] => Älpha [5] => Àlpha [1] => be [2] => Alpha [3] => Beta [4] => Álpha [0] => al ) However, if I run the same test with only the img*.png values I get the ideal output: Array ( [3] => img1.png [2] => img2.png [1] => img10.png [0] => img12.png ) Can anyone think of a way to preserve the Unicode sorting while adding natural sorting capabilities?
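
    Two notes added here, not part of the original post. First, the intl extension sits on top of ICU, and ICU collators have a numeric-collation mode that merges exactly these two behaviours; in PHP that should be reachable as $collator->setAttribute(Collator::NUMERIC_COLLATION, Collator::ON) before sorting, though it is worth confirming against the intl documentation for the installed version. Second, for the general shape of combining a collator with numeric chunk comparison by hand, here is a sketch in Java (the language used for examples in this digest):

        import java.math.BigInteger;
        import java.text.Collator;
        import java.util.*;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class NaturalCollator implements Comparator<String> {
            private static final Pattern CHUNK = Pattern.compile("\\d+|\\D+");
            private final Collator collator = Collator.getInstance(Locale.ROOT);

            private static List<String> chunks(String s) {
                List<String> out = new ArrayList<>();
                Matcher m = CHUNK.matcher(s);
                while (m.find()) out.add(m.group());
                return out;
            }

            @Override public int compare(String a, String b) {
                List<String> ca = chunks(a), cb = chunks(b);
                for (int i = 0; i < Math.min(ca.size(), cb.size()); i++) {
                    String x = ca.get(i), y = cb.get(i);
                    int cmp;
                    if (Character.isDigit(x.charAt(0)) && Character.isDigit(y.charAt(0))) {
                        cmp = new BigInteger(x).compareTo(new BigInteger(y));  // digit runs compare as numbers
                    } else {
                        cmp = collator.compare(x, y);                          // everything else uses the collator
                    }
                    if (cmp != 0) return cmp;
                }
                return ca.size() - cb.size();
            }

            public static void main(String[] args) {
                List<String> items = new ArrayList<>(Arrays.asList(
                        "img10.png", "img12.png", "img1.png", "img2.png", "Álpha", "al", "be"));
                items.sort(new NaturalCollator());
                System.out.println(items);  // al, Álpha, be, img1.png, img2.png, img10.png, img12.png
            }
        }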


  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: The set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object. What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidi) leads to bugs. What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate: public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}} protected virtual int _Blah {get {return blah;} set {blah = value;}} private int blah; I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically 3 layers of, or views, to blah!!! While this certainly works, it does not seem ideal as it requires the BL to provide one (public) interface for the app-at-large, and another (protected) interface for the data access layer. There is an additional problem: To my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL--whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BR's for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BR's that do become a problem, so then you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (common problem with programming against strings). Has anyone solved these problems?!


  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so: public class MyClass { private /*ideally-final*/ SomeObject someObject; MyClass() { someObject=null; } public void startup() { someObject=new SomeObject(...arguments from environment which are not available until startup is called...); } public void shutdown() { someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly } } I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect it's write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization. The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so: public class WoRmObject<T> { private T object; WoRmObject() { object=null; } public WoRmObject set(T val) { object=val; return this; } public T get() { return object; } } and then in MyClass, above, do: private final WoRmObject<SomeObject> someObject; MyClass() { someObject=new WoRmObject<SomeObject>(); } public void startup() { someObject.set(SomeObject(...arguments from environment which are not available until startup is called...)); } Which raises some questions for me: Is there a better way, or existing Java object (would have to be available in Java 4)? Is this thread-safe provided that no other thread accesses someObject.get() until after it's set() has been called. The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this. Given the completely unsynchronized WoRmObject container, it is ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does has the JMM always guaranteed that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated.
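
    An addition, not from the thread: on Java 5 and later the compact answer is a small holder whose set() is a compare-and-set, which both enforces write-once and gives readers safe publication. Note that this does not meet the question's Java 1.4 constraint, since java.util.concurrent arrived in Java 5 and the pre-JSR-133 memory model does not give volatile the same guarantees, so under 1.4 synchronization (or trusting the framework's ordering between startup() and the worker threads) remains the conservative route. A sketch:

        import java.util.concurrent.atomic.AtomicReference;

        public class WriteOnce<T> {
            private final AtomicReference<T> ref = new AtomicReference<T>();

            // First caller wins; any later set() fails loudly instead of silently replacing the value.
            public void set(T value) {
                if (value == null || !ref.compareAndSet(null, value)) {
                    throw new IllegalStateException("value is null or already initialized");
                }
            }

            // Safe to call from any thread once set() has completed.
            public T get() {
                T v = ref.get();
                if (v == null) throw new IllegalStateException("not initialized yet");
                return v;
            }

            public static void main(String[] args) {
                WriteOnce<String> holder = new WriteOnce<String>();
                holder.set("someObject created in startup()");
                System.out.println(holder.get());
            }
        }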


  • Perl: Compare and edit underlying structure in hash

    - by Mahfuzur Rahman Pallab
    I have a hash with a complex structure and I want to perform a search and replace. The first hash is like the following:

        $VAR1 = {
          abc => {
            123 => ["xx", "yy", "zy"],
            456 => ["ab", "cd", "ef"]
          },
          def => {
            659 => ["wx", "yg", "kl"],
            456 => ["as", "sd", "df"]
          },
          mno => {
            987 => ["lk", "dm", "sd"]
          },
        }

    and I want to iteratively search for all '123'/'456' elements, and if a match is found, I need to do a comparison of the sublayer, i.e. of ['ab','cd','ef'] and ['as','sd','df'], and in this case keep only the one with ['ab','cd','ef']. So the output will be as follows:

        $VAR1 = {
          abc => {
            123 => ["xx", "yy", "zy"],
            456 => ["ab", "cd", "ef"]
          },
          def => {
            659 => ["wx", "yg", "kl"]
          },
          mno => {
            987 => ["lk", "dm", "sd"]
          },
        }

    So the deletion is based on the substructure, not the index. How can it be done? Thanks for the help!
    Let's assume that I will declare the values to be kept, i.e. I will keep 456 => ["ab", "cd", "ef"] based on a predeclared value of ["ab", "cd", "ef"] and delete any other instance of 456 anywhere else. The search has to be done for every key, so the code will go through the hash, first taking 123 => ["xx", "yy", "zy"] and comparing it against the rest of the hash; if no match is found, do nothing. If a match is found, as in the case of 456 => ["ab", "cd", "ef"], it will compare the two and, since in case of a match the one with ["ab", "cd", "ef"] is to be kept, it will keep 456 => ["ab", "cd", "ef"] and discard any other instances of 456 anywhere else in the hash, i.e. it will delete 456 => ["as", "sd", "df"] in this case.
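
    A remark added here, not from the original question: once the "winner" value for a duplicated key is declared up front, the whole job is a single walk over the inner hashes, deleting that key wherever its value is not the preferred array (in Perl, a loop over values %$VAR1 with a deep comparison such as Data::Compare or a join-based check). The control flow, sketched in Java with nested maps standing in for the hash-of-hashes:

        import java.util.*;

        public class KeepPreferred {
            public static void main(String[] args) {
                Map<String, Map<String, List<String>>> data = new LinkedHashMap<>();
                data.put("abc", new LinkedHashMap<>(Map.of(
                        "123", List.of("xx", "yy", "zy"),
                        "456", List.of("ab", "cd", "ef"))));
                data.put("def", new LinkedHashMap<>(Map.of(
                        "659", List.of("wx", "yg", "kl"),
                        "456", List.of("as", "sd", "df"))));
                data.put("mno", new LinkedHashMap<>(Map.of(
                        "987", List.of("lk", "dm", "sd"))));

                String dupKey = "456";
                List<String> keep = List.of("ab", "cd", "ef");   // the predeclared winner for key 456

                // One pass: drop every 456 whose value is not the preferred list.
                for (Map<String, List<String>> inner : data.values()) {
                    List<String> v = inner.get(dupKey);
                    if (v != null && !v.equals(keep)) inner.remove(dupKey);
                }
                System.out.println(data);   // def keeps only 659
            }
        }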


  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (Not lower layer individual packets or frames.) Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example ettercap scripting code: if (ip.proto == TCP && tcp.dst == 80) { if (search(DATA.data, "gzip")) { replace("gzip", " "); msg("whited out gzip\n"); } } if (ip.proto == TCP && tcp.dst == 80) { if (search(DATA.data, "deflate")) { replace("deflate", " "); msg("whited out deflate\n"); } } http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833 I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to a VPN software or something? I would like to have a configuration similar to ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of unix command-line filters: >[eth0] <----------> [my filter] <----------> [eth1]< What I am already aware of, but are not suitable: Tun/Tap - Works at the lower packet layer, I need to work with the higher layer streams. Ettercap - I can't find any way to do replacements other than the restricted capabilities in the example above. Hooking into some VPN software? - I just can't figure out which or exactly how. libnetfilter_queue - Works with lower layer packets, not TCP/IP streams. Again, the rewriting should occur at the transport layer (Layer 4) as it does in this example, instead of a lower layer packet-based approach. Exact code will help immensely! Thanks!

