Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • WPF Skin Skinning Security Concerns

    - by Erik Philips
    I'm new to WPF in the .NET Framework (getting that out of the way). I'm writing an application where the interface is highly customizable simply by loading .xaml files (at the moment a Page element) into a Frame and then mapping the controls by name as needed. The idea is to let a community of people who are interested in making skins skin my application however they want (much like Winamp). Now the question arises, given my limited XAML knowledge: is it possible to create malicious XAML pages which, when downloaded and used, embed Frame or other elements that host HTML or call remote web pages with malicious content? I believe this could be the case. If so, I have two options: either an automated process that rejects these kinds of XAML files by checking their elements prior to allowing download (which I would assume would be most difficult), or a human review prior to download. Are there alternatives I'm unaware of that could make this whole process a lot easier?
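
    The automated check need not be as hard as it sounds. Below is a minimal, hypothetical sketch of the whitelist approach: treat the skin as plain XML and reject any element outside an approved set, so navigation hosts like Frame or WebBrowser never get loaded. The element list is illustrative, not a vetted security policy.

        // Hypothetical whitelist validation for uploaded skin XAML.
        // Element names below are examples only; build your own approved set.
        using System.Collections.Generic;
        using System.Xml;

        static class SkinValidator
        {
            static readonly HashSet<string> Allowed = new HashSet<string>
                { "Page", "Grid", "StackPanel", "Button", "TextBlock", "Image", "Style", "Setter" };

            public static bool IsSafe(string path)
            {
                using (XmlReader reader = XmlReader.Create(path))
                {
                    while (reader.Read())
                    {
                        // Reject anything not explicitly approved (Frame, WebBrowser, ...).
                        if (reader.NodeType == XmlNodeType.Element && !Allowed.Contains(reader.LocalName))
                            return false;
                    }
                }
                return true;
            }
        }

    A whitelist beats a blacklist here: new or obscure hostile elements fail closed instead of slipping through.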

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly every time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse whose MERGE statement took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        -- Note: a proc can't share the schema-qualified name of the work table,
        -- so a distinct proc name is used here.
        CREATE PROC Integration.MergePolicyLoad AS
        BEGIN
            BEGIN TRAN
            MERGE fact.Policy AS tgt
            USING Integration.MergePolicy AS src
                ON (tgt.PolicyId = src.PolicyId)
            WHEN NOT MATCHED BY TARGET THEN
                INSERT (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                VALUES (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            WHEN MATCHED AND src.Operation = 'U' THEN
                UPDATE SET PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
            WHEN MATCHED AND src.Operation = 'D' THEN
                DELETE;
            DELETE FROM Integration.MergePolicy
            COMMIT
        END

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each run of the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

    I was beginning by now to suspect that my problem was the work table being stored as a heap. Then I turned on SET STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit is the very high number of logical reads on the table stored as a heap. Even a plain SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as heaps, but I haven't, and my original assumption that a heap with no rows would only need to read one page to answer any query was definitely proven wrong. Some sort of physical defragmentation of the table might have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing the truncate operation releases the pages allocated to the heap rather quickly. In the future, I will be much more careful to have a clustered index on every table I use, even the work tables. Mike
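
    For reference, the fix described above boils down to two statements; the index name is illustrative:

        TRUNCATE TABLE Integration.MergePolicy   -- releases the heap's allocated-but-empty pages quickly
        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId)

    With the clustered index in place, DELETE FROM no longer leaves a trail of empty allocated pages for the next scan to wade through.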

    Read the article

  • How can I handle template dependencies in Template Toolkit?

    - by Smack my batch up
    My static web pages are built from a huge bunch of templates which are inter-included using Template Toolkit's INCLUDE and IMPORT directives, so page.html looks like this:

        [% INCLUDE top %]
        [% IMPORT middle %]

    Then top might include even more files. I have very many of these files, and they all have to be processed to create the web pages in various languages (English, French, etc. - natural languages, not computer languages). This is a very complicated process, and when one file is updated I would like to be able to automatically remake only the necessary files, using a makefile or something similar. Are there any tools, like makedepend for C files, which can parse Template Toolkit templates and create a dependency list for use in a makefile? Or are there better ways to automate this process?
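
    If nothing ready-made turns up, a rough makedepend substitute is not much code. The sketch below (untested, and limited to directives whose arguments are literal names rather than variables) scans templates and prints make-style dependency lines:

        #!/usr/bin/perl
        # Hypothetical TT dependency scanner: ./ttdep.pl *.html >> Makefile.deps
        use strict;
        use warnings;

        for my $file (@ARGV) {
            open my $fh, '<', $file or die "$file: $!";
            my %deps;
            while (<$fh>) {
                # Catch INCLUDE/PROCESS/IMPORT/INSERT with a bare template name.
                $deps{$2} = 1 while /\[%-?\s*(INCLUDE|PROCESS|IMPORT|INSERT)\s+([\w.\/]+)/g;
            }
            close $fh;
            print "$file: ", join(' ', sort keys %deps), "\n" if %deps;
        }

    Dynamic includes ([% INCLUDE $name %]) can't be resolved statically, so anything of that shape would still need hand-maintained rules.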

    Read the article

  • Animated Gif in form using C#

    - by Rabin
    In my project, whenever a long process is being executed, a small form is displayed with a small animated GIF. I use this.Show() to open the form and this.Close() to close it. The following is the code that I use:

        public partial class PlzWaitMessage : Form
        {
            public PlzWaitMessage()
            {
                InitializeComponent();
            }

            public void ShowSplashScreen()
            {
                Show();
                Application.DoEvents();
            }

            public void CloseSplashScreen()
            {
                Close();
            }
        }

    When the form opens, the image does not immediately start animating, and by the time it does animate, the process is usually complete or very near completion, which renders the animation useless. Is there a way I can have the GIF animate as soon as I load the form?
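
    The usual culprit is that the long job runs on the UI thread, so the message pump never gets a chance to redraw the GIF. A minimal sketch of one fix, assuming the work can be moved to a BackgroundWorker (DoLongRunningJob is a stand-in for the real task):

        var splash = new PlzWaitMessage();
        splash.ShowSplashScreen();

        var worker = new System.ComponentModel.BackgroundWorker();
        worker.DoWork += (s, e) => DoLongRunningJob();                     // hypothetical long task, off the UI thread
        worker.RunWorkerCompleted += (s, e) => splash.CloseSplashScreen(); // raised back on the UI thread
        worker.RunWorkerAsync();

    With the UI thread free, the PictureBox animates normally and Application.DoEvents() is no longer needed.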

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process designed around code releases? One of the main themes of agile is frequent releases, yet each company/client has its own test/approval processes that control code releases, and most of the time these slow down the pace of "frequent releases". Currently we have a proprietary tool-based workflow. A team that needs a code promotion must create a promotion request to one of the final UAT servers. Once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve; then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (one that is still agile) has worked for you?

    Read the article

  • Install Ubuntu on Mac OS X Mavericks, MacBook Air

    - by Unknown
    I was wondering whether it is okay to install Ubuntu on my MacBook Air, and if it is, please let me know the procedure. I would prefer to do it WITHOUT using rEFInd (I'm not sure of the name). The following is my system specification.

        Hardware Overview:
          Model Name: MacBook Air
          Model Identifier: MacBookAir6,2
          Processor Name: Intel Core i5
          Processor Speed: 1.3 GHz
          Number of Processors: 1
          Total Number of Cores: 2
          L2 Cache (per Core): 256 KB
          L3 Cache: 3 MB
          Memory: 8 GB

        System Software Overview:
          System Version: OS X 10.9.2 (13C1021)
          Kernel Version: Darwin 13.1.0
          Boot Volume: Macintosh HD
          Boot Mode: Normal

        MacBook Air (13-inch, Mid 2013), OS X Mavericks (10.9.2)

    Read the article

  • mysql_connect() causes page to not display (WAMP)

    - by VOIDHand
    I currently have a live website running MySQL and PHP where everything works. I've installed WampServer to be able to work on the site on my own computer, but I'm having trouble getting the site to work correctly. Plain HTML and PHP files work (by going to http://localhost/index.php, etc.), but some of the pages display "This webpage is not available" (in Chrome) or "Internet Explorer cannot display this webpage", which leads me to think there is an error in the server itself; when a page simply doesn't exist, Chrome instead displays "Oops! This link appears to be broken", and IE its standard 404 page. Each page in my site has an include() call to set up an instance of a class that handles all database transactions. This class opens the connection to the database in its constructor. I have found that commenting out the contents of the constructor lets the page load, although this understandably causes errors later in the page. This is the contents of the included file:

        class dbAccess {
            private $db;

            function __construct() {
                $this->db = mysql_connect("externalURL", "user", "password")
                    or die("Connection for database not found.");
                mysql_select_db("dbName", $this->db)
                    or die("Database not selected");
            }
            ...
        }

    As a note on what's happening here: I'm attempting to use the database on my live site rather than the local machine. The actual values of externalURL, dbName, user, and password work when I plug them into my database software, and the same file works fine on the live site, so I don't think the issue is with those values themselves, but rather with some setting in Apache or Wamp. The issue persists if I try a local database as well. This is an excerpt of my Apache error log for one attempt at logging into the site:

        [Wed Apr 14 15:32:54 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Apr 14 15:32:54 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Wed Apr 14 15:32:54 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Apr 14 15:32:54 2010] [notice] Parent: Created child process 1756
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Child process is running
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Acquired the start mutex.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting 64 worker threads.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting thread to listen on port 80.

    (isLoggedIn, mentioned earlier, is a method of the class above; assuming the connection works, that error should disappear.) I've tried searching for solutions online but haven't found anything that helps. If you need any further information to help solve this issue, feel free to ask. (One more note: I don't even have Skype on this computer, so I can't see it being the issue, even though that conflict seems to be the default answer for any Wamp problem.) [Edit: Removed an entry from the error log as it was an unrelated issue.]
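
    The "child process exited with status 255" notice suggests the PHP process is crashing inside mysql_connect rather than returning an error. A quick way to isolate it is a standalone test script - a minimal sketch using the same placeholder credentials as above:

        <?php
        // connect_test.php - browse to http://localhost/connect_test.php
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        $db = mysql_connect("externalURL", "user", "password");
        if (!$db) {
            die("Connect failed: " . mysql_error());
        }
        if (!mysql_select_db("dbName", $db)) {
            die("Select failed: " . mysql_error());
        }
        echo "Connected OK";

    If this page also dies silently, the problem lies in the WAMP PHP/MySQL client library pairing rather than your class; if it prints an error instead, mysql_error() tells you what the live server never needed to.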

    Read the article

  • Executing Secondary Applications

    - by JooBlow
    I have an application I am attempting to make "portable". The application contains a lot of secondary utility functions that I would like to execute on external files (outside the app). I tried adding them to the build process, but I didn't get any executables for them (just the main one and a few others). Is there a way to get these to execute? They are basically command-line utility functions that process some text files, but they use large files in the distribution and are also used by the main application. Thanks

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild behaves with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files, but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then, adding Folders 2 and 3, it starts removing all the files as I want. Then, at seemingly random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they should be), but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
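
    One plausible explanation (offered as a guess, not a diagnosis): wildcards in a top-level ItemGroup are expanded when the project file is parsed, so files that only appear during the build may or may not be in the list by the time AfterBuild runs. MSBuild 3.5 (which VS2008 uses) allows an ItemGroup inside a Target, deferring the expansion until the target actually executes:

        <Target Name="AfterBuild">
          <ItemGroup>
            <!-- Evaluated at execution time, after the build output exists -->
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)" />
        </Target>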

    Read the article

  • IE browser doesn't close and file download popup needs focus

    - by jkranthi
    Hi all, I am trying to click on a link once it becomes active, which then produces a File Download popup. Here I have two problems. 1) I start the code and leave it. After a long process, it waits for the link to become active; once it is, it clicks the link, a download popup opens (if everything goes well), and then it hangs there, flashing yellow in the taskbar, which means I have to click on the browser window for it to process whatever is next. Every time, I have to click on IE when the download popup appears. Is there a way to handle this, or am I doing something wrong? 2) The next problem is that even after I click on IE, the browser doesn't close even though I call ie.close. My code is below:

        # if the link is active
        ie.link(:text, a).click_no_wait

        prompt_message = "Do you want to open or save this file?"
        window_title = "File Download"
        save_dialog = WIN32OLE.new("AutoItX3.Control")
        save_dialog.WinGetText(window_title)
        save_dialog_obtained = save_dialog.WinWaitActive(window_title)
        save_dialog.WinKill(window_title)

        # some more code - normal puts statements

        ie.close  # ie hangs here for some strange reason
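
    One thing worth trying (a sketch under assumptions, not a verified fix): let AutoIt activate the dialog itself instead of waiting for it to gain focus. WinWaitActive blocks until the window is already active, which is exactly what your manual click provides; WinWait only requires the window to exist. The control ID below is a guess - the AutoIt Window Info tool reports the real one:

        require 'win32ole'

        autoit = WIN32OLE.new("AutoItX3.Control")
        if autoit.WinWait("File Download", "", 30) == 1      # wait up to 30s for the dialog to exist
          autoit.WinActivate("File Download")                # give it focus so IE stops flashing
          autoit.ControlClick("File Download", "", "Button2") # "Button2" is often Save; verify with Au3Info
        end

    Clicking a real button instead of WinKill may also leave IE in a cleaner state for the later ie.close.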

    Read the article

  • A simple explanation of Naive Bayes Classification

    - by Jaggerjack
    I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step-by-step process in English. I understand it uses comparisons of occurrence counts as probabilities, but I have no idea how the training data is related to the actual dataset. Please give me an explanation of what role the training set plays. I am giving a very simple example here, for fruits like banana, for example:

        training set:
        round-red
        round-orange
        oblong-yellow
        round-red

        dataset:
        round-red
        round-orange
        round-red
        round-orange
        oblong-yellow
        round-red
        round-orange
        oblong-yellow
        oblong-yellow
        round-red
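
    A hedged sketch of the idea in code, since the post's example lacks class labels: assume each training row is tagged with the fruit it describes (the labels below are invented). Training just counts; classification multiplies the per-class relative frequencies together and picks the largest:

        from collections import Counter

        # Hypothetical labels - the original post doesn't name the fruits.
        training = [
            (("round", "red"), "apple"),
            (("round", "orange"), "orange"),
            (("oblong", "yellow"), "banana"),
            (("round", "red"), "apple"),
        ]

        class_counts = Counter(label for _, label in training)
        feature_counts = Counter((label, f) for feats, label in training for f in feats)

        def classify(feats):
            def score(label):
                p = class_counts[label] / float(len(training))       # P(class)
                for f in feats:
                    # Laplace (+1) smoothing keeps unseen features from zeroing the product.
                    p *= (feature_counts[(label, f)] + 1.0) / (class_counts[label] + 2.0)
                return p
            return max(class_counts, key=score)

        print(classify(("round", "orange")))   # -> 'orange'

    The training set is where the counts come from; the dataset is just the stream of unlabeled items you then score using those counts.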

    Read the article

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is:

        $ svn commit -m "My commit message"
        Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)

    The local subversion is 1.6.9 on Linux, KDE 4.3, and svn status shows:

        M       .
        L       ws
        M       ws/manage.py
        L       ws/locales
        L       ws/locales/ja_JP
        L       ws/locales/ja_JP/LC_MESSAGES

    The process isn't using much of any resources. The server is Linux, served by Apache and mod_dav_svn, same subversion 1.6.9. I can't see any process that is handling the commit.
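
    A few generic things to look at (suggestions, not a known fix - the L flags above just mean the commit is holding working-copy locks while it runs):

        # Is the svn client still pushing bytes to the server?
        strace -p "$(pgrep -x svn)" -e trace=network

        # Are the VPN interface byte counters moving? (interface name is a guess)
        watch -n 5 'grep tun0 /proc/net/dev'

    At ~180 MB over a slow VPN, a commit that merely looks stalled may simply be hours from done; the byte counters settle that question quickly.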

    Read the article

  • Integrating Jython and CPython

    - by eric.frederich
    I am about to begin a project where I will likely use PyQt or PySide. I will need to interface with a buggy third-party piece of server software that provides C++ and Java APIs. The Java APIs are a lot easier to use because you get exceptions, whereas with the C++ libraries you get segfaults. Also, Python bindings for the Java APIs come for free with Jython, whereas Python bindings for the C++ APIs don't exist. So, how would a CPython PyQt client application be able to communicate with these Java APIs? How would you go about it? Would you have a separate Java/Jython process on the client that serializes or pickles objects and communicates with the PyQt process over a socket? I don't want to re-invent the wheel... is there some sort of standard interface for these types of things? Some technology I should look into? RPC, CORBA, etc.? Thanks, ~Eric
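
    As one concrete shape for the socket idea, here is a sketch with an invented wire format - line-delimited JSON rather than pickle, since pickle compatibility between CPython and Jython versions is shaky:

        # bridge.py - run under Jython, next to the vendor's Java API
        import json          # in the stdlib from Python 2.6; on Jython 2.5 use simplejson
        import SocketServer  # Python 2 module name, which is what Jython provides

        class Handler(SocketServer.StreamRequestHandler):
            def handle(self):
                request = json.loads(self.rfile.readline())
                # ... call the vendor's Java classes here via Jython ...
                result = {"status": "ok", "echo": request}   # placeholder response
                self.wfile.write(json.dumps(result) + "\n")

        if __name__ == "__main__":
            SocketServer.TCPServer(("127.0.0.1", 4000), Handler).serve_forever()

    The CPython/PyQt side then just opens a socket, writes one JSON line per call, and reads one line back - no CORBA machinery unless the API surface grows to need it.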

    Read the article

  • Why does mysqldump need to be fully pathed when called from a controller or model?

    - by Kris
    When I call mysqldump from a controller or model, I need to give the full path to the binary; when I call it from Rake, I don't. If I don't use the full path, I get a zero-byte file... I can confirm both processes are run as the same user.

        # Works in a controller, model and Rake task
        system "/usr/local/mysql/bin/mysqldump -u root #{w.database_name} > #{target_file}"

        # Only works in a Rake task
        system "mysqldump -u root #{w.database_name} > #{target_file}"

    If I call the Rake task from the action, it also fails (zero-byte file). OS: Mac, Ruby 1.8.6. EDIT: I use Etc.getpwuid(Process.uid).name to get the user of the current process.
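
    Same user doesn't imply same environment: a long-running server process typically inherits a much shorter PATH than an interactive shell, and /usr/local/mysql/bin is usually added only in shell profiles. A quick check, plus one way to keep the path in a single place (constant name and fallback are illustrative):

        # Compare the output of this from the controller and from the Rake task:
        logger.info ENV['PATH']

        # Centralize the binary location instead of hard-coding it everywhere:
        MYSQLDUMP = ENV['MYSQLDUMP_PATH'] || '/usr/local/mysql/bin/mysqldump'
        system "#{MYSQLDUMP} -u root #{w.database_name} > #{target_file}"

    The zero-byte file appears because the shell creates the redirect target before discovering that mysqldump isn't on its PATH.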

    Read the article

  • When debugging on WinCE, how can I set Visual Studio to always load symbol files it knows about?

    - by tsellon
    I'm debugging a WinCE C++ program in Visual Studio across an ActiveSync connection. Every time I start the process, it fails to load symbol information. However, if I right-click on the module and hit 'Load Symbols', it correctly locates the symbol information without any further prompting from me. Is there a way to set Visual Studio to either (a) automatically load this symbol information, or (b) automatically break the process into the debugger once the module is loaded (similar to what windbg does)? I'm guessing there's a setting somewhere, but I've yet to find it. Update: I forgot to mention in the original question that I'm not debugging with the instance of Visual Studio that created the exe.
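
    For (b), one low-tech workaround is to have the program interrupt itself early in startup, which gives the debugger a stopping point right after the module loads (a sketch; availability of these calls can vary between CE SKUs):

        #include <windows.h>

        void BreakForDebuggerIfAttached()
        {
            // Stops here under a debugger, giving you a chance to load
            // symbols before any interesting code runs; no-op otherwise.
            if (IsDebuggerPresent())
                DebugBreak();
        }

    Calling it as the first line of WinMain approximates windbg's initial break without relying on an IDE setting.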

    Read the article

  • Broadcom wireless card disappears on restart

    - by Kristian Jones
    I used 'Additional Drivers' to install the Broadcom STA wireless driver, and it returns an error. Within jockey.log it says the following, numerous times:

        2011-02-14 21:24:06,945 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: blacklisted, b43legacy: blacklisted

    After it returns the error, the network card works temporarily, until I restart the laptop. When I restart, I have to go through the whole procedure again: try to activate the driver, get an error, and have it work temporarily. The network card is as follows, on a Dell Inspiron 1545:

        Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] Rev 01

    I have been trying to solve this myself for many hours. Any help is appreciated.
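
    A hedged sequence that often helps when the STA driver activates but doesn't survive a reboot - the package and module names are Ubuntu's, but double-check them against your release:

        sudo apt-get remove --purge bcmwl-kernel-source
        sudo apt-get install bcmwl-kernel-source
        sudo modprobe wl                               # load the STA module now
        cat /etc/modprobe.d/blacklist-bcm43.conf       # b43/bcm43xx should stay blacklisted

    Reinstalling bcmwl-kernel-source rebuilds the wl module against the currently running kernel and re-registers it to load at boot, which is the usual gap when the card only works until the next restart.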

    Read the article

  • How do I get a Canon LBP 2900 printer working?

    - by elektron
    I am unable to set up my Canon LBP2900 printer on Ubuntu 12.04 64-bit. I converted the 64-bit RPM drivers with alien, installed them (version 2.40), and followed the procedure in the Canon guide (however, I changed the FIFO path to //localhost:59787, as somebody suggested, instead of 59687 as in the Canon guide). Initially I got the message "pstocapt failed". The drivers had put pstocapt and backend/ccp in the lib64 directory instead of the lib directory; I copied them to the appropriate directories under lib, but I still get the error "ccp send_data error exit". I also tried with the FIFO path //localhost:59687. I also tried the Radu script, but the drivers that script uses are version 1.90, and it has conflicting packages.
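
    For comparison, these are the registration steps roughly as Canon's CAPT guide gives them; the device path and queue name are illustrative, and the PPD file name should be verified against what the 2.40 packages actually installed:

        sudo /usr/sbin/lpadmin -p LBP2900 -m CNCUPSLBP2900CAPTK.ppd -v ccp://localhost:59687 -E
        sudo /usr/sbin/ccpdadmin -p LBP2900 -o /dev/usb/lp0
        sudo service ccpd restart
        sudo /usr/sbin/ccpdadmin        # no arguments: lists registered printers, to verify

    "ccp send_data error exit" frequently points at the ccpd daemon not running, or listening on a different port than lpadmin was told, so checking `service ccpd status` before changing FIFO paths may save a reinstall.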

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that when a MySQL database with a table of tens of millions of records is backed up with

        mysqldump some_db > some_db.sql

    the dump contains one big INSERT INTO statement (is it a single insert statement that handles all the records?). So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such long lines when restoring the DB?
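
    Two hedged knobs to experiment with (behavior varies by MySQL version, so treat these as starting points): mysqldump's --net_buffer_length caps how long each multi-row INSERT gets, and turning off per-row bookkeeping during the reload shifts work out of the critical path:

        # Smaller INSERT batches at dump time (bytes per statement):
        mysqldump --opt --net_buffer_length=16384 some_db > some_db.sql

        # Cheaper reload: commit once, defer constraint checks to the end.
        mysql some_db <<'SQL'
        SET autocommit=0;
        SET unique_checks=0;
        SET foreign_key_checks=0;
        SOURCE some_db.sql;
        COMMIT;
        SQL

    Also, if the 1.8% figure is the client-side mysql process and no local mysqld exists, the server is presumably remote, and the network rather than statement parsing may be the real bottleneck.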

    Read the article

  • Impersonation in ASP.NET, confused about implementation when used with Active Directory & SQL Server

    - by AWC
    I have an internal website that uses integrated Windows authentication, and this website makes SQL Server and Active Directory queries via the System.DirectoryServices namespace. To use System.DirectoryServices in ASP.NET, I have to run IIS under an account that has the correct privileges and, importantly, have impersonation set to true in the web.config. With this done, when I make a query against AD, the credentials of the worker process (IIS) are used instead of the ASPNET account, and the queries succeed. Now, if I am also using SQL Server with a connection string configured for integrated security ('Integrated Security=SSPI'), then it interprets the ASP.NET impersonation to mean that I want to access the database with the Windows credentials of the request, not the worker process. I hope I'm wrong and have simply got the config wrong, but I don't think I have, and this seems inconsistent. It should be noted I'm using IIS 5.1 for development, and obviously this doesn't have the concept of app pools, which I believe would resolve the problem.
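
    For reference, the configuration being described boils down to these fragments (server and database names are illustrative):

        <!-- web.config -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>

        <!-- connection string with integrated security -->
        <connectionStrings>
          <add name="Main"
               connectionString="Data Source=myServer;Initial Catalog=MyDb;Integrated Security=SSPI" />
        </connectionStrings>

    With impersonate="true", calls that pick up the thread's Windows identity - SqlConnection.Open under SSPI included - generally run as the authenticated caller, subject to the usual double-hop delegation limits when the database sits on another machine.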

    Read the article

  • iPhone Dev: Activate the view of a tabBar button

    - by Alex N Sanchez
    Hi, I have an application with a tab bar that handles all the views. In the first view I have a login process. When that process finishes, I want to go automatically to the second tab bar view without making the user click its respective tab bar button. All I've got so far is highlighting the button with this:

        myTabBar.selectedItem = [myTabBar.items objectAtIndex:1];

    But I have no idea how to bring the second view, the one related to that button, to the front automatically. Right now the user has to press the button when it gets highlighted (selected). Any idea how to do this? Is it possible? It would be much appreciated. Thanks.
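
    Assuming the tabs are managed by a UITabBarController (rather than a bare UITabBar), selecting on the controller rather than the bar is what actually swaps the visible view:

        // Either form switches both the highlighted button and the front view:
        myTabBarController.selectedIndex = 1;   // second tab
        // or
        myTabBarController.selectedViewController =
            [myTabBarController.viewControllers objectAtIndex:1];

    Setting selectedItem on the raw UITabBar only updates the highlight, which matches the behavior described above.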

    Read the article

  • Background job with status in rails

    - by pepernik
    Hey. I would like to upload a file and then parse it. Because parsing can take up to 10 minutes, I installed the delayed_job plugin and called the parsing function through the send_later method. I should mention this is an AJAX app. Imagine pressing an AJAX button that starts the upload, after which the source is imported into the database. During the process I want to show a progress bar or message (importing...), and when the task completes, the status changes to done. My question is: what is the best way to check the status of the process? What would you do? My idea is to have another controller action, "status", which looks into the database and provides the right status.
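
    That database-backed idea can stay very small. A hedged sketch - the model, column, and action names below are all invented:

        class ImportsController < ApplicationController
          def create
            import = Import.create!(:status => 'queued', :file => params[:file])
            import.send_later(:parse!)             # delayed_job runs this in the background
            render :json => { :id => import.id }
          end

          def status
            # Polled from the page every few seconds via AJAX.
            render :json => { :status => Import.find(params[:id]).status }
          end
        end

    The parse! method updates its own status column ('importing', then 'done') as it goes, so the polling action never needs to know anything about the worker.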

    Read the article

  • Windows Azure worker roles: One big job or many small jobs?

    - by Ryan Elkins
    Is there any inherent advantage to using multiple workers to process pieces of procedural code versus having each worker process the entire load? In other words, if my workflow looks like this:

        1. Get work from queue0 and do A
        2. Store result from A in queue1
        3. Get result from queue1 and do B
        4. Store result from B in queue2
        5. Get result from queue2 and do C

    is there an inherent advantage to using 3 workers that each run the entire process themselves versus 3 workers that each do a part of the work (worker 1 does steps 1 & 2, worker 2 does 3 & 4, worker 3 does 5)? If we only care about work being done (finishing step 5), it would seem to scale the same way (once you're using at least 3 workers). Maybe the big job is better because workers in that setup have fewer bottleneck issues?

    Read the article

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process. I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700 MB. But build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't build some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?
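
    One data point worth checking, with the caveat that it only applies once you can move past VS2005: MSBuild 3.5 (the VS2008 era) accepts a maximum-concurrency switch for building independent projects in parallel, so a non-linear dependency chain can use both cores from the command line:

        msbuild MySolution.sln /m:2 /p:Configuration=Release

    The /m value is the number of parallel project builds; the solution name and configuration above are placeholders.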

    Read the article

  • Having difficulty understanding mapreduce

    - by mahesh
    Hi all, I have seen the link below, which is about getting started with mapreduce in Python: http://code.google.com/p/appengine-mapreduce/wiki/GettingStartedInPython But I am still not able to understand how it works. I am executing the code below, but I am not able to understand what exactly happens.

    mapreduce.yaml:

        mapreduce:
        - name: Testmapper
          mapper:
            input_reader: mapreduce.input_readers.DatastoreInputReader
            handler: main.process
            params:
            - name: entity_kind
              default: main.userDetail

    mapreduce/main.py:

        # some code
        class userDetail(db.Model):
            name = db.StringProperty()

        # some code
        def process(u):
            u.name = "mahesh"
            yield op.db.Put(u)

    I execute this and it gives me status = success on the status page, but I am not able to understand what happened. The main thing I want to do with mapreduce is to search or count records from an entity. Can anyone please help me? Thanks in advance
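
    What the job above does is iterate over every userDetail entity, set its name to "mahesh", and write it back - op.db.Put is a mutation the framework applies for you. For counting, the framework's counters are the usual tool; a hedged sketch, where 'matched' is an arbitrary counter name and the predicate is only an example:

        from mapreduce import operation as op

        def process(entity):
            # Tally entities instead of mutating them; totals appear on the
            # mapreduce status page when the job finishes.
            if entity.name == 'mahesh':
                yield op.counters.Increment('matched')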

    Read the article
