Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 373/976 | < Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >

  • OS X: How to create a user account with access only to the applications that she has installed.

    - by pmurillo
    Until now, I've been the sole and proud owner of an iMac. I've always logged in as an administrator, and I've also installed a bunch of software following the standard procedure of dragging and dropping it into the Applications folder. This is about to change, as sharing it with a friend makes a lot of sense now that I only have time to use the computer in the mornings, and she can only use it at night. I thought this was going to be really easy: just create a user account for her and that's all, I thought. Unfortunately, this is not the case. When she logs into the computer using her username and password, she has access to the same applications I've installed. Even the applications that I set up to start automatically when I log in start for her when she does. How can I set up a user who has access only to the applications that she has installed, but none of the ones installed by other users? Many thanks

    Read the article

  • Locale number formatting issue on Windows 2008

    - by kris
    On a project we have multiple servers running Windows 2008. The servers are using the Russian locale. We have several programs that use floating point numbers, but the fractional part of the number on SOME servers is getting truncated. Through the regional settings each machine has: Locale: Russian; Current Location: United States; Decimal Symbol: . (period). I've tried distributing the changes through "Copy Settings", and even though the procedure works it seems like the settings aren't actually being propagated. So next I went into the registry. There is a key called "sDecimal", and in all cases on all servers the value in the key is '.'. There is no difference that I can find between the servers that DO have correct decimal formatting and those that DO NOT. Any advice on where I can look for a problem like this?

    Read the article

  • What's the correct terminology for something that isn't quite classification nor regression?

    - by TC
    Let's say that I have a problem that is basically classification. That is, given some input and a number of possible output classes, find the correct class for the given input. Neural networks and decision trees are some of the algorithms that may be used to solve such problems. These algorithms typically only emit a single result, however: the resulting classification. Now what if I weren't only interested in one classification, but in the posterior probabilities that the input belongs to each of the classes? I.e., instead of the answer "This input belongs in class A", I want the answer "This input belongs to class A with 80%, class B with 15% and class C with 5%". My question is not on how to obtain these posterior probabilities, but rather on the correct terminology to describe the process of finding them. You could call it regression, since we are now trying to estimate a number of real-valued numbers, but I am not quite sure if that's right. I feel it's not exactly classification either; it's something in between the two. Is there a word that describes the process of finding the class-conditional posterior probabilities that some input belongs in each of the possible output classes? P.S. I'm not exactly sure if this question is enough of a programming question, but since it's about machine learning and machine learning generally involves a decent amount of programming, let's give it a shot.
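    A tiny, purely illustrative Java snippet of the distinction (the class scores are invented): a "hard" classifier emits a single label, while the "soft" variant emits an estimated posterior over all classes, here obtained by normalizing raw scores with a softmax:

        public class SoftClassification {
            // Normalizes raw scores into a probability distribution (softmax).
            static double[] softmax(double[] scores) {
                double max = Double.NEGATIVE_INFINITY;
                for (double s : scores) max = Math.max(max, s);
                double sum = 0.0;
                double[] p = new double[scores.length];
                for (int i = 0; i < scores.length; i++) {
                    p[i] = Math.exp(scores[i] - max); // subtract max for numerical stability
                    sum += p[i];
                }
                for (int i = 0; i < p.length; i++) p[i] /= sum;
                return p;
            }

            public static void main(String[] args) {
                String[] classes = {"A", "B", "C"};
                double[] scores = {2.0, 0.3, -0.8}; // hypothetical raw model outputs

                // "Hard" classification: only the argmax label.
                int best = 0;
                for (int i = 1; i < scores.length; i++) if (scores[i] > scores[best]) best = i;
                System.out.println("Hard answer: class " + classes[best]);

                // "Soft" answer: the full posterior estimate over all classes.
                double[] posterior = softmax(scores);
                for (int i = 0; i < classes.length; i++) {
                    System.out.printf("P(%s | input) ~ %.2f%n", classes[i], posterior[i]);
                }
            }
        }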

    Read the article

  • rsync windows to linux permission denied

    - by user64908
    Using the command

        rsync -avzP --delete --omit-dir-times ../../ [email protected]:/var/www/mysite/

    I'm getting

        rsync: mkstemp "/var/www/mysite/.." failed: Permission denied (13)

    If ext is in the www-data group, should I still set all the files to be owned by user www-data? I am trying to publish the files with rsync and then set the permissions using

        sudo chown -R www-data doc
        sudo chgrp -R www-data doc

    but I can't even rsync because of the permission denied. The SSH works fine, and so does the rsync, except when it tries to write over or update some of the files in /var/www.

    Client: Windows 7; Cygwin 1.7.16 (GNU bash, version 4.1.10(4)-release (i686-pc-cygwin)); rsync version 3.0.9, protocol version 30.
    Server: Ubuntu 12.04; Apache2; root accounts [ubuntu, ext]; groups [www-data]; sudo vigr has www-data:x:33:ubuntu,ext.

    I have already configured this: http://stackoverflow.com/questions/2124169/cwrsync-ignores-nontsec-on-windows-7 This article has also managed to confuse me: http://unix.stackexchange.com/questions/41687/how-should-i-rsync-files-in-var-www-if-i-want-them-to-be-owned-by-www-data What is the right procedure?

    Read the article

  • Installing Oracle 11gR2 on Red Hat Linux

    - by KItis
    I have a basic question about installing applications on a Linux operating system. I am going to express my issue using an Oracle DB installation as an example. When installing the Oracle database, I created a user group called dba and a user in this group called ora112, so this user is allowed to install the database. My question is: if ora112's umask is set to 077, then no other users will be able to configure the Oracle database. Why do we need to follow this practice? Is it an accepted procedure for application installation on Linux? Please share your experience with me. Thanks in advance for looking into this issue. Say I install a Java application this way; then no other application belonging to a different user account will be able to use the Java running on this computer because of this access restriction.

    Read the article

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and the initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process approaches creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB and sometimes to 1.5 GB. If the process doesn't take that much, the layout is rendered properly and the memory falls back to 50 MB; otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue?

    Read the article

  • Need a piece of advice about e-mail automation in an MS Exchange + MS Office environment

    - by be here now
    Hi, guys. I need your help in the following simple situation. I've got an MS Exchange server and some client computers running XP with Office 2003 installed, and I've got a process I need to automate. Twice a day a known list of people sends an e-mail to a certain mailbox (let's call it the manager's mailbox) - basically, an accomplishment report. After receiving letters from all of these people, the mailbox owner sends an e-mail to another mailbox, meaning that a certain process is done. What I need to do is replace this manager's mailbox with a depersonalized mailbox that will accumulate all the reports and automatically send a message after collecting all of them. I am definitely not in an "oh my God, what should I do?" situation, and currently my imagination shows me a couple of ways to solve this problem, which I'm going to try, and I'm not asking for a ready solution. But since I'm not experienced in Office/VBA development, I'd like to ask a corresponding pro's opinion. Can you point me in the right direction from a best-practices point of view?

    Read the article

  • The same C# code produces different output in Visio Professional and Premium

    - by user615993
    I have built a simple conversion add-in, but its behavior is unfortunately different in the different Visio editions (Visio 2010 Professional and Visio 2010 Premium). The add-in takes a process diagram created with shapes from Stencil_1.vss and creates a new, slightly different process diagram with shapes from Stencil_2.vsd. It loops through a Visio page and, for each shape found, creates a new shape from a new master shape (from Stencil_2.vsd) and drops it onto the new page. Geometry, captions etc. are the same; only the shape appearance is changed. Below is the source diagram: When I run the code in Visio 2010 Professional, the swimlane shapes are drawn correctly. When I run the same code from Visio Premium, the swimlane appearance and layout are mismatched. Both times I drop the SAME shape ("Swimlane", from the same stencil) onto the page with the SAME code fragment:

        Visio.Master vm = swimlane_stencil.Masters.get_ItemU(@"Swimlane");
        Visio.Shape TargetShape = targetPage.Drop(vm, shape_x, shape_y);

    How can I ensure that the code always produces the same (correct) output? Must I disable any (premium) features in the swimlane ShapeSheet?

    Read the article

  • How do I add an Approver to SharePoint 2010?

    - by CompGeekess
    I am still new to SharePoint and am learning so much, but have come across a few hiccups, and here is one. I want to add an approver to SharePoint 2010 who has FULL CONTROL. My manager requested that I find out where the approval requests are going and redirect them to him (I have no idea where or how to find this out). Is this possible to do in Central Administration, or must I go into each site/subsite and set him to be the approver that way? Googling showed me how to approve workflows or how to create approvals, and my other resources didn't give much help either. So far I have gone into a few individual sites and set my manager and me up as approvers with full control, but I am uncertain if this is the correct procedure or if there is a better way to do this. For example, have the lower levels inherit from the higher level - set security at the highest level and cascade to the child levels. Thank you.

    Read the article

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces: synchronizedCollection(Collection<T> c), synchronizedList(List<T> list), synchronizedMap(Map<K,V> m), synchronizedSet(Set<T> s), synchronizedSortedMap(SortedMap<K,V> m), synchronizedSortedSet(SortedSet<T> s). Analogously, it also has six corresponding unmodifiableXXX methods. The glaring omission here is the set of utility methods for NavigableMap<K,V>. It's true that it extends SortedMap, but so does SortedSet extend Set, and Set extend Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code of the OpenJDK version of Collections.java suggests that this is just a "mechanical" process. Is it true that in general you can add a synchronized thread-safety feature like this? If it's such a mechanical process, can it be automated (Eclipse plug-in, etc.)? Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?
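    A minimal, deliberately partial sketch of the "mechanical" delegation pattern in question (illustration only, not the JDK implementation; a complete wrapper would delegate every NavigableMap method the same way, and Java 8 later added Collections.synchronizedNavigableMap for exactly this case):

        import java.util.Map;
        import java.util.NavigableMap;
        import java.util.TreeMap;

        public final class SynchronizedNavigableMap<K, V> {
            private final NavigableMap<K, V> delegate;
            private final Object mutex;

            public SynchronizedNavigableMap(NavigableMap<K, V> delegate) {
                this.delegate = delegate;
                this.mutex = this;
            }

            public V put(K key, V value) {
                synchronized (mutex) { return delegate.put(key, value); }
            }

            public V get(Object key) {
                synchronized (mutex) { return delegate.get(key); }
            }

            // Every NavigableMap-specific call follows exactly the same pattern.
            public Map.Entry<K, V> firstEntry() {
                synchronized (mutex) { return delegate.firstEntry(); }
            }

            public Map.Entry<K, V> ceilingEntry(K key) {
                synchronized (mutex) { return delegate.ceilingEntry(key); }
            }

            public K floorKey(K key) {
                synchronized (mutex) { return delegate.floorKey(key); }
            }

            // Views (headMap, subMap, navigableKeySet, ...) would have to be wrapped
            // with the same mutex, which is what Collections' own wrappers do internally.

            public static void main(String[] args) {
                SynchronizedNavigableMap<Integer, String> map =
                        new SynchronizedNavigableMap<>(new TreeMap<Integer, String>());
                map.put(1, "one");
                map.put(3, "three");
                System.out.println(map.ceilingEntry(2)); // prints 3=three
            }
        }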

    Read the article

  • Safari specific scroll bar issue in legacy code

    - by user1237169
    I am trying to debug an issue that is occurring specifically in Safari. On a few pages of a web application, the content is larger than the frame and a scroll bar appears on the right, but when the scroll bar moves up or down the content does not scroll with it. So you can move the scroll bar, but the content itself is inaccessible even though the scroll bar is mobile. The issue only occurs with the "Multi-Process Windows" debug mode option, not with the "Single-Process Windows" option. The scroll bar works perfectly fine in Firefox, IE, and Chrome, just not in Safari. Because there's a lot of legacy code, I'm not quite sure exactly what the actual content is and which specific HTML elements are relevant. From what I can tell there's an iframe element, html element, body element, div element, iframe element, html element, body element, and finally some divs. Edit: Does it matter if some of the elements within the <iframe> have the attribute scrolling="no"? I see this on a few of the elements within the iframe, but my coworker reassures me they don't matter.

    Read the article

  • GetIpAddrTable() leaks memory. How to resolve that?

    - by Stabledog
    On my Windows 7 box, this simple program causes the memory use of the application to creep up continuously, with no upper bound. I've stripped out everything non-essential, and it seems clear that the culprit is the Microsoft Iphlpapi function "GetIpAddrTable()". On each call, it leaks some memory. In a loop (e.g. checking for changes to the network interface list), it is unsustainable. There seems to be no async notification API which could do this job, so now I'm faced with possibly having to isolate this logic into a separate process and recycle the process periodically -- an ugly solution. Any ideas?

        // IphlpLeak.cpp - demonstrates that GetIpAddrTable leaks memory internally: run this and watch
        // the memory use of the app climb up continuously with no upper bound.
        #include <stdio.h>
        #include <windows.h>
        #include <assert.h>
        #include <Iphlpapi.h>
        #pragma comment(lib, "Iphlpapi.lib")

        void testLeak() {
          static unsigned char buf[16384];
          DWORD dwSize(sizeof(buf));
          if (GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false) == ERROR_INSUFFICIENT_BUFFER) {
            assert(0); // we never hit this branch.
            return;
          }
        }

        int main(int argc, char* argv[]) {
          for (int i = 0; true; i++) {
            testLeak();
            printf("i=%d\n", i);
            Sleep(1000);
          }
          return 0;
        }

    Read the article

  • How to build a Hackintosh with an originally purchased Mac OS X Lion? [closed]

    - by Nabayan
    Has anyone here ever built a Hackintosh themselves? Actually I'm new to this technology and am trying to get hold of a Mac system where I can start coding in Xcode. I've tried VMplayer and was able to run the Mac environment, but unfortunately was not able to run Xcode. I have an original Mac OS downloaded a few days ago and want to build a Hackintosh. There is no suitable information that I have come across which gives a step-by-step foolproof procedure for building a Hackintosh yourself. It would be really, really great if anyone can guide me here. Thanks. Naba

    Read the article

  • What about parallelism across a network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to a web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: the fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application; and the very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously results in temporary memory spikes, causing the process' virtual memory size to approach the 2 GB limit. This memory is released back pretty quickly, but occasionally if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out of memory error. I can throttle back the number of threads but then my CPU usage is not as high as I would like. I am thinking of creating worker processes rather than worker threads to help avoid the out of memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering what are the best/easiest methods to communicate the input and output between the processes in .NET? The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows or .NET specific mechanism I should use?

    Read the article

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes: since every allocation needs 4 KB, we probably reach the 2 GB limit very soon (this problem could be solved if I made a 64-bit executable; I didn't try it yet); and even when I only need a few hundred megabytes, the allocations fail at a certain moment. The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips): Where can I find information about the maximum number of PTEs in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTEs be configured in the application or in Windows? Thanks, Patrick. PS: a note for those who will try to argue that you shouldn't write your own memory manager: my application is rather specific, so I really want full control over memory management (can't give any more details); last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption); we also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either; we also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).

    Read the article

  • Named pipe stalls threads?

    - by entens
    I am attempting to push updates into a process via a named pipe, but in doing so my process loop now seems to stall on while ((line = sr.ReadLine()) != null). I'm a little mystified as to what might be wrong, as this is my first foray into named pipes.

        void RefreshThread()
        {
            using (NamedPipeServerStream pipeStream = new NamedPipeServerStream("processPipe", PipeDirection.In))
            {
                pipeStream.WaitForConnection();
                using (StreamReader sr = new StreamReader(pipeStream))
                {
                    for (; ; )
                    {
                        if (StopThread == true)
                        {
                            StopThread = false;
                            return; // exit loop and terminate the thread
                        }

                        // push update for heartbeat
                        int HeartbeatHandle = ItemDictionary["Info.Heartbeat"];
                        int HeartbeatValue = (int)Config.Items[HeartbeatHandle].Value;
                        Config.Items[HeartbeatHandle].Value = ++HeartbeatValue;
                        SetItemValue(HeartbeatHandle, HeartbeatValue, (short)0xC0, DateTime.Now);

                        string line = null;
                        while ((line = sr.ReadLine()) != null)
                        {
                            // line is in the format: item, value, timestamp
                            string[] parts = line.Split(',');

                            // push update and store value in item cache
                            int handle = ItemDictionary[parts[0]];
                            string value = parts[1];
                            Config.Items[handle].Value = int.Parse(value);
                            DateTime timestamp = DateTime.FromBinary(long.Parse(parts[2]));
                            SetItemValue(handle, value, (short)0xC0, timestamp);
                        }

                        Thread.Sleep(500);
                    }
                }
            }
        }

    Read the article

  • Implementing threading to prevent a UI block caused by a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function... precisely getDirectoryListingAsync() of the File class. This method is supposed to return an object containing the list of files in a specified folder. I found that when calling this method on a directory with a lot of files (in my tests more than 20k files), after a few seconds there is a block on the UI until the process is completed. I think this method is separated into two main blocks: 1) get the list of files, 2) create the array with the details of the files. Point 1 seems to be async (for a few seconds the UI is responsive), then when the process passes from point 1 to point 2 the UI blocks until the complete event is dispatched. Here's some (simple) code:

        private function checkFiles(dir:File):void
        {
            if (dir.exists)
            {
                dir.addEventListener(FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale);
                dir.getDirectoryListingAsync();
                // after this point, for the first seconds the UI responds well (point 1),
                // a few seconds later (point 2) the UI is frozen
            }
        }

        private function listaImmaginiLocale(event:FileListEvent):void
        {
            // from this point on the UI is responsive again...
        }

    Actually in my projects there are some functions that perform heavy CPU usage, and to prevent the UI block I implemented a simple function that, after some iterations, will wait, giving the UI time to be refreshed.

        private var maxIteration:int = 150000;

        private function sampleFunct(offset:int = 0):void
        {
            if (offset < maxIteration)
            {
                // do something

                // call the recursive function using a timeout..
                // if the offset is a multiple of 1000 the function will wait 15 millisec,
                // otherwise it will be called immediately
                // 1000 is a random number for the purpose of this example, but I usually change the
                // value based on how heavy the function itself is...
                setTimeout(function():void { sampleFunct(++offset); }, (offset % 1000 == 0 ? 15 : 0));
            }
        }

    Using this method I got a good responsive UI without affecting performance... I'd like to implement it into the getDirectoryListingAsync method, but I don't know if it's possible, how I can do it, or where the file to edit or extend is. Any suggestions?

    Read the article

  • How do I use a URL path instead of a file path in an Open File dialog in Mac OSX or ChromiumOS?

    - by Chris
    In Windows 7 (and perhaps earlier), the default "Open File" dialog box allows you to type a full URL into the "File name" section as if it were a file path, e.g. "http://www.example.com/pic.gif" instead of "C:/windows/pictures/pic.gif". When uploading a file to a website on the client side - say, an image - this allows the client to upload a picture located on a server accessible via the URL instead of downloading the image, saving it locally, then referencing the local image in the "Open File" dialog. It's a great option for Windows users. I have three separate questions: What is this procedure formally called? How do I describe this succinctly so that my searches for more information are fruitful? Can something similar be done in Mac OSX, Chromium OS, or a Linux environment? If so, how? Thanks!

    Read the article

  • Is there a better way to keep track of session variable creation/access throughout different pages?

    - by Brandon
    Here's what I am working on. At my website I have multiple processes with each one containing multiple steps. Now in one of the processes, there is an error checking routine executed before proceeding to the next step of that process. A session var is set indicating the error status and it will either redirect back to the referrer or display the next page's contents. Now this kind of functionality, I believe, is common throughout web development. The issue that is occurring is that session vars are left around and are not being cleaned up properly. At times this introduces undesired behavior. My website is growing and I find that I am requiring more and more session vars to keep track of different system and error states. So I was thinking about creating a kind of "session variable keeper" to keep track of session var usage. The idea is fairly simple. It will have the notion of a context (e.g. registration process) and allow access to a predefined set of session vars within that context. In addition, the var and context will be paired with an action to proceed to some form of event handling. So if you haven't noticed I'm new to web development. Any thoughts or comments on the idea that I am proposing would be greatly appreciated. The back-end is written in PHP/MySQL.

    Read the article

  • Compile C++ in Visual Studio

    - by Kasun
    Hi all. I use this method to compile a C++ file in VS, but even when I provide the correct file it returns false. Can anyone help me? This is a class called CL:

        class CL
        {
            private const string clexe = @"cl.exe";
            private const string exe = "Test.exe", file = "test.cpp";
            private string args;

            public CL(String[] args)
            {
                this.args = String.Join(" ", args);
                this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file;
            }

            public Boolean Compile(String content, ref string errors)
            {
                if (File.Exists(exe)) File.Delete(exe);
                if (File.Exists(file)) File.Delete(file);
                File.WriteAllText(file, content);

                Process proc = new Process();
                proc.StartInfo.UseShellExecute = false;
                proc.StartInfo.RedirectStandardOutput = true;
                proc.StartInfo.RedirectStandardError = true;
                proc.StartInfo.FileName = clexe;
                proc.StartInfo.Arguments = this.args;
                proc.StartInfo.CreateNoWindow = true;
                proc.Start();

                //errors += proc.StandardError.ReadToEnd();
                errors += proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();

                bool success = File.Exists(exe);
                return success;
            }
        }

    This is my button click event:

        private void button1_Click(object sender, EventArgs e)
        {
            string content = "#include <stdio.h>\nmain(){\nprintf(\"Hello world\");\n}\n";
            string errors = "";
            CL k = new CL(new string[] { });
            if (k.Compile(content, ref errors))
                Console.WriteLine("Success!");
            else
                MessageBox.Show("Errors are: " + errors);
        }

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB text file with 5 columns delimited by tabs. A row is considered a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting the duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience on how they would go about doing such de-duping? This de-duping code will be throw-away, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows
            i = current_row_no.
            Iterate over row no. i+1 to last_row
                if (col1 matches    // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches)
                {
                    col5List.set(i, get col5);  // aggregate
                }

    Duplicate example: A and B are duplicates, with A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be A=(1,1,1,1,1+2), C=(2,1,1,1,1) [notice that B has been kicked out].
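    A rough single-pass alternative sketch in Java (assumed details for illustration only: input is tab-separated in input.tsv, the first four columns form the duplicate key, and the fifth column is numeric and is summed when rows collide). It replaces the quadratic nested loop with one hash lookup per row, at the cost of holding one entry per distinct key in memory:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                // key = first four columns joined by tab, value = aggregated fifth column
                Map<String, Long> aggregated = new LinkedHashMap<>();

                try (BufferedReader in = new BufferedReader(new FileReader("input.tsv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t", -1);
                        String key = cols[0] + '\t' + cols[1] + '\t' + cols[2] + '\t' + cols[3];
                        long value = Long.parseLong(cols[4]);
                        aggregated.merge(key, value, Long::sum); // aggregate duplicates
                    }
                }

                try (PrintWriter out = new PrintWriter(new FileWriter("output.tsv"))) {
                    for (Map.Entry<String, Long> e : aggregated.entrySet()) {
                        out.println(e.getKey() + '\t' + e.getValue());
                    }
                }
            }
        }

    If the distinct keys do not fit in memory, the usual fallback is an external sort on the first four columns followed by a single linear aggregation pass.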

    Read the article

  • Framework or tool for "distributed unit testing"?

    - by user262646
    Is there any tool or framework able to make it easier to test distributed software written in Java? My system under test is a peer-to-peer software, and I'd like to perform testing using something like PNUnit, but with Java instead of .Net. The system under test is a framework I'm developing to build P2P applications. It uses JXTA as a lower subsystem, trying to hide some complexities of it. It's currently an academic project, so I'm pursuing simplicity at this moment. In my test, I want to demonstrate that a peer (running in its own process, possibly with multiple threads) can discover another one (running in another process or even another machine) and that they can exchange a few messages. I'm not using mocks nor stubs because I need to see both sides working simultaneously. I realize that some kind of coordination mechanism is needed, and PNUnit seems to be able to do that. I've bumped into some initiatives like Pisces, which "aims to provide a distributed testing environment that extends JUnit, giving the developer/tester an ability to run remote JUnits and create complex test suites that are composed of several remote JUnit tests running in parallel or serially", but this project and a few others I have found seem to be long dead.
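    For the coordination part, a minimal JUnit 4 sketch of the kind of test described, under the assumption of a hypothetical peer entry point (my.p2p.PeerMain) that prints READY on stdout once it has joined the network and exits with code 0 after exchanging its messages; each peer runs in its own JVM and the test merely sequences and observes them:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.util.concurrent.TimeUnit;

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        public class TwoPeerDiscoveryTest {

            // Launches one peer in its own JVM and returns the running process.
            private Process launchPeer(String peerName) throws Exception {
                String javaBin = System.getProperty("java.home") + "/bin/java";
                ProcessBuilder pb = new ProcessBuilder(
                        javaBin, "-cp", System.getProperty("java.class.path"),
                        "my.p2p.PeerMain", peerName); // hypothetical peer main class
                pb.redirectErrorStream(true);
                return pb.start();
            }

            // Blocks until the peer reports readiness on stdout (the coordination point).
            // A real test would keep draining the output afterwards so the child cannot
            // block on a full pipe.
            private void waitForReady(Process peer) throws Exception {
                BufferedReader out = new BufferedReader(new InputStreamReader(peer.getInputStream()));
                String line;
                while ((line = out.readLine()) != null) {
                    if (line.contains("READY")) {
                        return;
                    }
                }
                throw new AssertionError("peer terminated before becoming ready");
            }

            @Test
            public void twoPeersDiscoverEachOtherAndExchangeMessages() throws Exception {
                Process alice = launchPeer("alice");
                waitForReady(alice);

                Process bob = launchPeer("bob"); // the second peer should discover the first
                waitForReady(bob);

                // Each peer is expected to exit 0 once it has discovered the other peer
                // and exchanged its messages; a non-zero exit means the scenario failed.
                assertTrue("alice did not finish in time", alice.waitFor(60, TimeUnit.SECONDS));
                assertTrue("bob did not finish in time", bob.waitFor(60, TimeUnit.SECONDS));
                assertEquals(0, alice.exitValue());
                assertEquals(0, bob.exitValue());
            }
        }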

    Read the article

  • Implementing Zend MVC for my existing site - first step?

    - by Joel
    Hi guys, OK, newbie question here. I'll try not to bombard SO with lots of questions, and hopefully this first one will show me the method I'll need to follow for subsequent conversions. I have a web-based calendar system that I developed, but it was coded for me procedurally (using PHP). I'm now working on learning OO and want to integrate this site into my localhost Zend Framework setup and slowly start converting parts to OO and the Zend Framework MVC process in particular. As I've said before, I understand that this will be a slow process, and when I'm done, I still probably won't have anything as OO-friendly as if I had rewritten it from scratch, but I'd like to use this as a learning experience. So, I have dropped the whole site into my localhost/zend/Public folder, and everything is showing up great and linking to the database, etc. My question is: what would be the easiest first component to switch over to the MVC model? This site has a bit of everything: forms, login, authentication, some jQuery, etc. Can anyone point to a tutorial that would address what I'm trying to do? If indeed a form would be one of the simpler things to switch, can someone walk me through those changes? Another idea is changing over all the header info, etc. Thanks for any pointers on where to start! EDIT: Also, I understand that SO is mainly for specific coding questions; I'm happy to share specific code once I have an idea about which section to tackle first...

    Read the article

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt" but it fails. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the local *STDOUT; line in the code below, then I can see the above output on the command prompt display (of course the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;

        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print "inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport {
            print Win32::FormatMessage( Win32::GetLastError() );
        }

    Read the article
