Search Results

Search found 5174 results on 207 pages for 'prevent'.


  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
    3. An FTP connection is enabled for the camera's FTP user.
    4. Web camera opens an FTP connection to the server.
    5. Web camera begins taking photos.
    6. Web camera sends each photo to the FTP server.
    On each image URL request:
    7. Server reads the latest image on the hard drive uploaded via FTP for that camera.
    8. Server deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the web server's disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests with the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

    1. User opens page to view the web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
    3. Camera is sent an HTTP request to start posting images to a provided URL.
    4. Web camera begins taking photos.
    5. Web camera sends POST requests to the server as fast as it can.
    On each image URL request:
    6. Server reads the latest image from Redis.
    7. Server tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads in transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could have streaming at once?
    - Is there any way to prevent potentially DoS'ing our own servers with the web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance, Alan
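
    The "only the latest frame per camera lives in Redis" idea is easy to sketch. The poster's stack is PHP/Nginx, so treat the following C# fragment (using the StackExchange.Redis client) as a purely illustrative model of the data flow; the key naming scheme and the short expiry are assumptions, not part of the original design:

        using System;
        using StackExchange.Redis;

        class CameraFrameStore
        {
            private readonly IDatabase _db;

            public CameraFrameStore(string redisHost)
            {
                _db = ConnectionMultiplexer.Connect(redisHost).GetDatabase();
            }

            // Called by the endpoint that receives the camera's POSTed JPEG:
            // each camera has exactly one key, overwritten on every upload.
            public void StoreLatestFrame(string cameraId, byte[] jpegBytes)
            {
                // A short expiry makes stale frames disappear on their own,
                // so no explicit delete of older images is needed.
                _db.StringSet("camera:" + cameraId + ":latest", jpegBytes,
                              expiry: TimeSpan.FromSeconds(2));
            }

            // Called by the URL the browser polls every second.
            public byte[] GetLatestFrame(string cameraId)
            {
                RedisValue value = _db.StringGet("camera:" + cameraId + ":latest");
                return value.HasValue ? (byte[])value : null;
            }
        }

    A rough capacity estimate falls out of the same model: assuming roughly 30 KB per JPEG and one frame per second per camera (both assumptions), each camera costs about 0.25 Mbit/s of inbound bandwidth and one Redis SET per second, and each viewer costs about the same outbound plus one GET per second. Bandwidth and the HTTP request rate, rather than Redis itself, are likely to be the first limits hit.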

    Read the article

  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off, I'll state my system specs:

    - Acer Aspire M1100
    - Windows Vista Home Basic 32-bit OEM
    - 2 GB of DDR2 RAM
    - 160 GB hard drive
    - 2.7 GHz AMD Athlon 64 processor
    - ATI Radeon X1250 graphics card

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up, I would receive a BSOD reporting BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), after which my computer would not start up with any USB devices plugged in and always required me to run Startup Repair before it started. When I first got it to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows Solutions Center brought up on my screen; I restarted again and installed the fix hoping it would stop the problem, but it did not. Upon starting up after installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that Startup Repair could not fix the problem, so I restored, as I most likely should have done in the first place. After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. In an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this, my computer would no longer even start up: when I boot normally it gets just past the loading screen and then restarts, and when I run Startup Repair it runs for about 15 seconds and then the computer restarts as well. I have tried running Ubuntu live discs in order to simply access the drive and copy the two-month-old physical backup I have of my C drive back onto it, but whenever I run the live OS, the computer restarts just as the load screen finishes and it is about to boot. Yet if I put in a GParted disc I am able to boot it fully, although it only gives me partition management rather than access to the file system, and when I attempt to access the internet through it, the computer once again restarts. So my question is: how could the update, and whatever else has happened, prevent me from running live OSes properly?

    Read the article

  • Asterisk server firewall script allows 2-way audio from incoming calls, but not on outgoing?

    - by cappie
    I'm running an Asterisk PBX on a virtual machine directly connected to the Internet and I really want to prevent script kiddies, l33t h4x0rz and actual hackers from getting access to my server. The basic way I protect my calling bill now is by using 32-character passwords, but I would much rather have a proper way to protect the machine at the firewall. The firewall script I'm currently using is shown below; however, without the established-connection firewall rule (mentioned rule #1), I cannot receive incoming audio from the target during outgoing calls:

        #!/bin/bash
        # first, clean up!
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD DROP   # we're not a router
        iptables -P OUTPUT ACCEPT
        # don't allow invalid connections
        iptables -A INPUT -m state --state INVALID -j DROP
        # always allow connections that are already set up (MENTIONED RULE #1)
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # always accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # always accept traffic on these ports
        #iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # always allow DNS traffic
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        # allow return traffic to the PBX
        iptables -A INPUT -p udp -m udp --dport 50000:65536 -j ACCEPT
        iptables -A INPUT -p udp -m udp --dport 10000:20000 -j ACCEPT
        iptables -A INPUT -p udp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -p tcp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -m multiport -p udp --dports 10000:20000
        iptables -A INPUT -m multiport -p tcp --dports 10000:20000
        # IP addresses of the office
        iptables -A INPUT -s 95.XXX.XXX.XXX/32 -j ACCEPT
        # accept everything from the trunk IP's
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        # accept everything on localhost
        iptables -A INPUT -i lo -j ACCEPT
        # accept all outgoing traffic
        iptables -A OUTPUT -j ACCEPT
        # DROP everything else
        #iptables -A INPUT -j DROP

    I would like to know what firewall rule I'm missing for this all to work. There is so little documentation on which ports (incoming and outgoing, return ports included) Asterisk actually needs. Are there any firewall/iptables specialists here who see major problems with this firewall script? It's so frustrating not being able to find a simple firewall solution that lets me have a PBX running somewhere on the Internet, firewalled in such a way that it ONLY allows connections from and to the office, the DNS servers and the trunk(s), and only supports SSH (port 22) and ICMP traffic for the outside world. Hopefully, using this question, we can solve this problem once and for all.

    Read the article

  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:

    1. when the semaphore file exists
    2. and the request is not for /style.css or /favicon.ico
    3. show the content of /closed.html

    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:

        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css or favicon.ico; in those cases I exceed the loop limits. What am I missing here to cause the rewrite iteration to stop when someone requests style.css or favicon.ico?

    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.

        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css

    Read the article

  • Our Look at the Internet Explorer 9 Platform Preview

    - by Asian Angel
    Have you been hearing all about Microsoft’s work on Internet Explorer 9 and are curious about it? If you are wanting a taste of the upcoming release then join us as we take a look at the Internet Explorer 9 Platform Preview. Note: Windows Vista and Server 2008 users may need to install a Platform Update (see link at bottom for more information). Getting Started If you are curious about the systems that the platform preview will operate on here is an excerpt from the FAQ page (link provided below). There are two important points of interest here: The platform preview does not replace your regular Internet Explorer installation The platform preview (and the final version of Internet Explorer 9) will not work on Windows XP There really is not a lot to the install process…basically all that you will have to deal with is the “EULA Window” and the “Install Finished Window”. Note: The platform preview will install to a “Program Files Folder” named “Internet Explorer Platform Preview”. Internet Explorer 9 Platform Preview in Action When you start the platform preview up for the first time you will be presented with the Internet Explorer 9 Test Drive homepage. Do not be surprised that there is not a lot to the UI at this time…but you can get a good idea of how Internet Explorer will act. Note: You will not be able to alter the “Homepage” for the platform preview. Of the four menus available there are two that will be of interest to most people…the “Page & Debug Menus”. If you go to navigate to a new webpage you will need to go through the “Page Menu” unless you have installed the Address Bar Mini-Tool (shown below). Want to see what a webpage will look like in an older version of Internet Explorer? Then choose your version in the “Debug Menu”. We did find it humorous that IE6 was excluded from the choices offered. Here is what the URL entry window looks like if you are using the “Page Menu” to navigate between websites. Here is the main page of the site here displayed in “IE9 Mode”…looking good. Here is the main page viewed in “Forced IE5 Document Mode”. There were some minor differences (colors, sidebar, etc.) in how the main page displayed in comparison to “IE9 Mode”. Being able to switch between modes makes for an interesting experience… As you can see there is not much to the “Context Menu” at the moment. Notice the slightly altered icon for the platform preview… “Add” an Address Bar of Sorts If you would like to use a “make-shift” Address Bar with the platform preview you can set up the portable file (IE9browser.exe) for the Internet Explorer 9 Test Platform Addressbar Mini-Tool. Just place it in an appropriate folder, create a shortcut for it, and it will be ready to go. Here is a close look at the left side of the Address Bar Mini-Tool. You can try to access “IE Favorites” but may have sporadic results like those we experienced during our tests. Note: The Address Bar Mini-Tool will not line up perfectly with the platform preview but still makes a nice addition. And a close look at the right side of the Address Bar Mini-Tool. In order to completely shut down the Address Bar Mini-Tool you will need to click on “Close”. Each time that you enter an address into the Address Bar Mini-Tool it will open a new window/instance of the platform preview. Note: During our tests we noticed that clicking on “Home” in the “Page Menu” opened the previously viewed website but once we closed and restarted the platform preview the test drive website was the starting/home page again. 
    Even if the platform preview is not running, the Address Bar Mini-Tool can still run, as shown here. Note: You will not be able to move the Address Bar Mini-Tool from its locked-in position at the top of the screen. Now for some fun. With just the Address Bar Mini-Tool open you can enter an address and cause the platform preview to open. Here is our example from above now open in the platform preview…good to go.

    Conclusion

    During our tests we did experience the occasional crash, but overall we were pleased with the platform preview’s performance. The platform preview handled rather well and definitely seemed much quicker than Internet Explorer 8 on our test system (a definite bonus!). If you are an early adopter then this could certainly get you in the mood for the upcoming beta releases!

    Links

    - Download the Internet Explorer 9 Preview Platform
    - Download the Internet Explorer 9 Test Platform Addressbar Mini-Tool
    - Information about Platform Update for Windows Vista & Server 2008
    - View the Internet Explorer 9 Platform Preview FAQ

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files. Controlling the Indexer By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on. Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window. Searching for Files You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows. Using Advanced Search Operators On Windows 7, you’ll notice that you can add “search filters” form the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. 
If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface. Creating Saved Searches Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files. One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.     

    Read the article

  • These are a few objective-type questions for which I was not able to find the solution [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator? a. It adds properties for direct access to both the Key and the Value b. It is optimized to handle the structure of a Dictionary. c. It provides properties to determine if the Dictionary is enumerated in Key or Value order d. It provides reverse lookup methods to distinguish a Key from a specific Value 2. When Implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true? a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog. b. Methods can be configured to automatically mark a transaction as complete by the use of attributes. c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library. d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method. 3. Which of the following are true regarding event declaration in the code below? class Sample { event MyEventHandlerType MyEvent; } a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs> b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs c. MyEventHandlerType may have a non-void return type d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type. e. MyEventHandlerType cannot be declared static 4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio? a. Developers can create assemblies directly from the MSIL Source Code. b. Developers can examine PE header information in an assembly. c. Developers can generate XML Schemas from class definitions contained within an assembly. d. Developers can strip all meta-data from managed assemblies. e. Developers can split an assembly into multiple assemblies. 5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush,Font,Pen, and Icon share? a. They encapsulate native resource and must be properly Disposed to prevent potential exhausting of resources. b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations. c. You can inherit from these classes to provide enhanced or customized functionality 6. Which of the following are required to be true by objects which are going to be used as keys in a System.Collections.HashTable? a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods. b. Key objects must be immutable for the duration they are used within a HashTable. c. Get HashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equalityl unless the HashTable constructor is provided with an IEqualityComparer parameter. d. Each Element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement e. All of the above 7. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. A Nullable type is a structure. c. An implicit conversion exists from any non-nullable value type to a nullable form of that type. d. An implicit conversion exists from any nullable value type to a non-nullable form of that type. e. A predefined conversion from the nullable type S? to the nullable type T? 
exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 8. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 9. Which of the following does using Initializer Syntax with a collection as shown below require? CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; a. The Collection Class must implement System.Collections.Generic.ICollection<T> b. The Collection Class must implement System.Collections.Generic.IList<T> c. Each of the Items in the Initializer List will be passed to the Add<T>(T item) method d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor+K110 10. What impact will using implicitly typed local variables as in the following example have? var sample = "Hello World"; a. The actual type is determined at compilation time, and has no impact on the runtime b. The actual type is determined at runtime, and late binding takes effect c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place. d. "var" itself is a specific type defined by the framework, and no special binding takes place 11. Which of the following is not supported by remoting object types? a. well-known singleton b. well-known single call c. client activated d. context-agile 12. In which of the following ways do structs differ from classes? a. Structs can not implement interfaces b. Structs cannot inherit from a base struct c. Structs cannot have events interfaces d. Structs cannot have virtual methods 13. Which of the following is not an unboxing conversion? a. void Sample1(object o) { int i = (int)o; } b. void Sample1(ValueType vt) { int i = (int)vt; } c. enum E { Hello, World} void Sample1(System.Enum et) { E e = (E) et; } d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; } e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; } 14. Which of the following are characteristics of the System.Threading.Timer class? a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer. b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread) c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method. 15. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 16. Which of the following scenarios are applicable to Window Workflow Foundation? a. Document-centric workflows b. Human workflows c. User-interface page flows d. Builtin support for communications across multiple applications and/or platforms e. 
All of the above 17. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 18 While using the capabilities supplied by the System.Messaging classes, which of the following are true? a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object. c. Objects must be XMLSerializable in order to be transferred over a MessageQueue instance. d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed e. Entries removed from a MessageQueue within the scope of a transaction, will be pushed back into the front of the queue if the transaction fails. 19. Which of the following are true about declarative attributes? a. They must be inherited from the System.Attribute. b. Attributes are instantiated at the same time as instances of the class to which they are applied. c. Attribute classes may be restricted to be applied only to application element types. d. By default, a given attribute may be applied multiple times to the same application element. 20. When using version 3.5 of the framework in applications which emit a dynamic code, which of the following are true? a. A Partial trust code can not emit and execute a code b. A Partial trust application must have the SecurityCriticalAttribute attribute have called Assert ReflectionEmit permission c. The generated code no more permissions than the assembly which emitted it. d. It can be executed by calling System.Reflection.Emit.DynamicMethod( string name, Type returnType, Type[] parameterTypes ) without any special permissions Within Windows Workflow Foundation, Compensating Actions are used for: a. provide a means to rollback a failed transaction b. provide a means to undo a successfully committed transaction later c. provide a means to terminate an in process transaction d. achieve load balancing by adapting to the current activity 21. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 22. Which of the following controls allows the use of XSL to transform XML content into formatted content? a. System.Web.UI.WebControls.Xml b. System.Web.UI.WebControls.Xslt c. System.Web.UI.WebControls.Substitution d. System.Web.UI.WebControls.Transform 23. To which of the following do automatic properties refer? a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field b. You attribute a member field so that the compiler will generate get and set accessors c. The compiler creates properties for your class based on class level attributes d. 
They are properties which are automatically invoked as part of the object construction process 24. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. An implicit conversion exists from any non-nullable value type to a nullable form of that type. c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 25. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is accessible via reflection. c. The compiler generates a code that will store the information separately from the instance to ensure its security. 26. When using an implicitly typed array, which of the following is most appropriate? a. All elements in the initializer list must be of the same type. b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list c. All elements in the initializer list must be implicitly convertible to common type which is a base type of the items actually in the list 27. Which of the following is false about anonymous types? a. They can be derived from any reference type. b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type. c. All properties of an anonymous type are read/write. 28. Which of the following are true about Extension methods. a. They can be declared either static or instance members b. They must be declared in the same assembly (but may be in different source files) c. Extension methods can be used to override existing instance methods d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors
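
    Questions 8, 17 and 25 all probe the same point about automatic properties, and the claim that the compiler-generated backing field is reachable via reflection is easy to verify. A minimal sketch follows; the "<Value>k__BackingField" name is an implementation detail of current C# compilers, not something to rely on in production code:

        using System;
        using System.Reflection;

        class Sample
        {
            public int Value { get; set; }   // automatic property: the compiler supplies the backing field
        }

        class Program
        {
            static void Main()
            {
                var s = new Sample { Value = 42 };

                // The generated backing field is private and cannot be referenced by name
                // in normal code, but reflection can reach it.
                FieldInfo backing = typeof(Sample).GetField(
                    "<Value>k__BackingField",
                    BindingFlags.Instance | BindingFlags.NonPublic);

                Console.WriteLine(backing.GetValue(s));   // prints 42
            }
        }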

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition.  Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task related issues, the aptly named Task class. The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition.  We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added.  In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more.  In particular, note that I never mentioned threading or parallelism in my description of a Task.  Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used.  The Task API supports using Tasks in an entirely single threaded, synchronous manner. Tasks are very much like standard delegates.  You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously.  This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4.  If the wrapped delegate within a Task raises an OperationCanceledException during it’s operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically.  This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling. Tasks also provide a clean API which can be used for waiting on a task.  Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model.  Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed.  Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely.  In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing time out options.  This provides a very simple, clean API for waiting on single or multiple tasks. Finally, Tasks provide a much nicer model for Exception handling.  If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property.  This exception is stored with the Task directly, and does not tear down the application.  
    Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception.  For example, suppose we have the following code:

        Task taskOne = new Task(() =>
        {
            throw new ApplicationException("Random Exception!");
        });
        Task taskTwo = new Task(() =>
        {
            throw new ArgumentException("Different exception here");
        });

        // Start the tasks
        taskOne.Start();
        taskTwo.Start();

        try
        {
            Task.WaitAll(new[] { taskOne, taskTwo });
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerExceptions.Count);
            foreach (var inner in e.InnerExceptions)
                Console.WriteLine(inner.Message);
        }

    Here, our routine will print:

        2
        Different exception here
        Random Exception!

    Note that we had two separate tasks, each of which raised a distinctly different type of exception.  We can handle this cleanly, with very little code, in a much nicer manner than the Asynchronous Programming API.  We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property.  Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.
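
    The cancellation behaviour described earlier can be observed with a similarly small snippet. This is only a sketch (it assumes the System.Threading and System.Threading.Tasks namespaces are imported); the polling loop exists purely to keep the outcome deterministic:

        CancellationTokenSource cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;

        // Pass the same token to the Task constructor that the delegate observes.
        Task worker = new Task(() =>
        {
            while (!token.IsCancellationRequested)
            {
                Thread.Sleep(10);                    // pretend to do work
            }
            token.ThrowIfCancellationRequested();    // cooperative cancellation
        }, token);

        worker.Start();
        cts.Cancel();

        try
        {
            worker.Wait();                           // surfaces the cancellation
        }
        catch (AggregateException)
        {
            // Wait() still throws, but IsCanceled is the cleaner check.
        }

        Console.WriteLine(worker.IsCanceled);        // True

    Whether the delegate gets a chance to run or the token is observed before it starts, the task ends in the Canceled state, so IsCanceled can be checked without any exception-type sniffing.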

    Read the article

  • Cannot Install/Start MySQL Server

    - by Peezy Bro
    Okay, I decided to migrate from MySQL Server 5.5.37 to Percona Server 5.6. I ended up removing MySQL Server by the following: sudo apt-get --purge remove mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-core-5.5 mysql-common sudo apt-get autoremove sudo apt-get autoclean rm -rf /var/lib/mysql rm -rf /etc/mysql Now here is my problem, when I try to install MySQL Server 5.6 it goes through its process and when it asks me for a password, it comes up with Cannot set MySQL "root" password. After it "installs" MySQL wont start up and I get permission denied?. Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. brandon@brandon-DB:~$ sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Suggested packages: libmldbm-perl libnet-daemon-perl libplrpc-perl libsql-statement-perl tinyca mailx The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 0 upgraded, 10 newly installed, 0 to remove and 35 not upgraded. Need to get 0 B/8,955 kB of archives. After this operation, 96.3 MB of additional disk space will be used. Do you want to continue? [Y/n] y Preconfiguring packages ... Selecting previously unselected package mysql-common. (Reading database ... 167760 files and directories currently installed.) Preparing to unpack .../mysql-common_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libmysqlclient18:amd64. Preparing to unpack .../libmysqlclient18_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libdbi-perl. Preparing to unpack .../libdbi-perl_1.630-1_amd64.deb ... Unpacking libdbi-perl (1.630-1) ... Selecting previously unselected package libdbd-mysql-perl. Preparing to unpack .../libdbd-mysql-perl_4.025-1_amd64.deb ... Unpacking libdbd-mysql-perl (4.025-1) ... Selecting previously unselected package libterm-readkey-perl. Preparing to unpack .../libterm-readkey-perl_2.31-1_amd64.deb ... Unpacking libterm-readkey-perl (2.31-1) ... Selecting previously unselected package mysql-client-core-5.5. Preparing to unpack .../mysql-client-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-client-5.5. Preparing to unpack .../mysql-client-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-core-5.5. Preparing to unpack .../mysql-server-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-5.5. (Reading database ... 168116 files and directories currently installed.) Preparing to unpack .../mysql-server-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... 
Unpacking mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server. Preparing to unpack .../mysql-server_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-server (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for ureadahead (0.100.0-16) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Setting up libdbi-perl (1.630-1) ... Setting up libdbd-mysql-perl (4.025-1) ... Setting up libterm-readkey-perl (2.31-1) ... Setting up mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing package mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin (2.19-0ubuntu6) ... No apport report written because the error message indicates its a followup error from a previous failure. Processing triggers for ureadahead (0.100.0-16) ... Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have all my database/tables dumped and on a seperate HDD. This is also a Dev Machine and not my main Production Machine. I also backed up the MySQL_Config and MySQL_Data.

    Read the article

  • Computer Networks UNISA - Chap 12 &ndash; Networking Security

    - by MarkPearl
    After reading this section you should be able to Identify security risks in LANs and WANs and design security policies that minimize risks Explain how physical security contributes to network security Discuss hardware and design based security techniques Understand methods of encryption such as SSL and IPSec, that can secure data in storage and in transit Describe how popular authentication protocols such as RADIUS< TACACS,Kerberos, PAP, CHAP, and MS-CHAP function Use network operating system techniques to provide basic security Understand wireless security protocols such as WEP, WPA and 802.11i Security Audits Before spending time and money on network security, examine your networks security risks – rate and prioritize risks. Different organizations have different levels of network security requirements. Security Risks Not all security breaches result from a manipulation of network technology – there are human factors that can play a role as well. The following categories are areas of considerations… Risks associated with People Risks associated with Transmission and Hardware Risks associated with Protocols and Software Risks associated with Internet Access An effective security policy A security policy identifies your security goals, risks, levels of authority, designated security coordinator and team members, responsibilities for each team member, and responsibilities for each employee. In addition it specifies how to address security breaches. It should not state exactly which hardware, software, architecture, or protocols will be used to ensure security, nor how hardware or software will be installed and configured. A security policy must address an organizations specific risks. to understand your risks, you should conduct a security audit that identifies vulnerabilities and rates both the severity of each threat and its likelihood of occurring. Security Policy Content Security policy content should… Policies for each category of security Explain to users what they can and cannot do and how these measures protect the networks security Should define what confidential means to the organization Response Policy A security policy should provide for a planned response in the event of a security breach. The response policy should identify the members of a response team, all of whom should clearly understand the the security policy, risks, and measures in place. Some of the roles concerned could include… Dispatcher – the person on call who first notices the breach Manager – the person who coordinates the resources necessary to solve the problem Technical Support Specialist – the person who focuses on solving the problem Public relations specialist – the person who acts as the official spokesperson for the organization Physical Security An important element in network security is restricting physical access to its components. There are various techniques for this including locking doors, security people at access points etc. You should identify the following… Which rooms contain critical systems or data and must be secured Through what means might intruders gain access to these rooms How and to what extent are authorized personnel granted access to these rooms Are authentication methods such as ID cards easy to forge etc. Security in Network Design The optimal way to prevent external security breaches from affecting you LAN is not to connect your LAN to the outside world at all. The next best protection is to restrict access at every point where your LAN connects to the rest of the world. 
Router Access List – can be used to filter or decline access to a portion of a network for certain devices. Intrusion Detection and Prevention While denying someone access to a section of the network is good, it is better to be able to detect when an attempt has been made and notify security personnel. This can be done using IDS (intrusion detection system) software. One drawback of IDS software is it can detect false positives – i.e. an authorized person who has forgotten his password attempts to logon. Firewalls A firewall is a specialized device, or a computer installed with specialized software, that selectively filters or blocks traffic between networks. A firewall typically involves a combination of hardware and software and may reside between two interconnected private networks. The simplest form of a firewall is a packet filtering firewall, which is a router that examines the header of every packet of data it receives to determine whether that type of packet is authorized to continue to its destination or not. Firewalls can block traffic in and out of a LAN. NOS (Network Operating System) Security Regardless of the operating system, generally every network administrator can implement basic security by restricting what users are authorized to do on a network. Some of the restrictions include things related to Logons – place, time of day, total time logged in, etc Passwords – length, characters used, etc Encryption Encryption is the use of an algorithm to scramble data into a format that can be read only by reversing the algorithm. The purpose of encryption is to keep information private. Many forms of encryption exist and new ways of cracking encryption are continually being invented. The following are some categories of encryption… Key Encryption PGP (Pretty Good Privacy) SSL (Secure Sockets Layer) SSH (Secure Shell) SCP (Secure CoPy) SFTP (Secure File Transfer Protocol) IPSec (Internet Protocol Security) For a detailed explanation on each section refer to pages 596 to 604 of textbook Authentication Protocols Authentication protocols are the rules that computers follow to accomplish authentication. Several types exist and the following are some of the common authentication protocols… RADIUS and TACACS PAP (Password Authentication Protocol) CHAP and MS-CHAP EAP (Extensible Authentication Protocol) 802.1x (EAPoL) Kerberos Wireless Network Security Wireless transmissions are particularly susceptible to eavesdropping. The following are two wireless network security protocols WEP WPA
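
    The "Key Encryption" category in the list above is the easiest one to make concrete. A minimal C# sketch using the framework's Aes class is shown below; the sample plaintext is made up, and a real deployment would also have to manage key storage and IV handling, which this sketch glosses over by reusing one Aes instance for both directions:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class KeyEncryptionDemo
        {
            static void Main()
            {
                byte[] plaintext = Encoding.UTF8.GetBytes("Payroll data for Q3");

                using (Aes aes = Aes.Create())       // symmetric key encryption
                {
                    byte[] ciphertext;

                    using (var encryptor = aes.CreateEncryptor())
                    {
                        ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
                    }

                    // Only a holder of the same key (and IV) can reverse the transformation.
                    using (var decryptor = aes.CreateDecryptor())
                    {
                        byte[] roundTrip = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
                        Console.WriteLine(Encoding.UTF8.GetString(roundTrip));   // Payroll data for Q3
                    }
                }
            }
        }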

    Read the article

  • 8 Mac System Features You Can Access in Recovery Mode

    - by Chris Hoffman
    A Mac’s Recovery Mode is for more than just reinstalling Mac OS X. You’ll find many other useful troubleshooting utilities here — you can use these even if your Mac can’t boot normally. To access Recovery Mode, restart your Mac and press and hold the Command + R keys during the boot-up process. This is one of several hidden startup options on a Mac.

    Reinstall Mac OS X

    Most people know Recovery Mode as the place you go to reinstall OS X on your Mac. Recovery Mode will download the OS X installer files from the Internet if you don’t have them locally, so they don’t take up space on your disk and you’ll never have to hunt for an operating system disc. Better yet, it will download up-to-date installation files so you don’t have to spend hours installing operating system updates later. Microsoft could learn a lot from Apple here.

    Restore From a Time Machine Backup

    Instead of reinstalling OS X, you can choose to restore your Mac from a Time Machine backup. This is like restoring a system image on another operating system. You’ll need an external disk containing a backup image created on the current computer to do this.

    Browse the Web

    The Get Help Online link opens the Safari web browser to Apple’s documentation site. It’s not limited to Apple’s website, though — you can navigate to any website you like. This feature allows you to access and use a browser on your Mac even if it isn’t booting properly. It’s ideal for looking up troubleshooting information.

    Manage Your Disks

    The Disk Utility option opens the same Disk Utility you can access from within Mac OS X. It allows you to partition disks, format them, scan disks for problems, wipe drives, and set up drives in a RAID configuration. If you need to edit partitions from outside your operating system, you can just boot into the recovery environment — you don’t have to download a special partitioning tool and boot into it.

    Choose the Default Startup Disk

    Click the Apple menu on the bar at the top of your screen and select Startup Disk to access the Choose Startup Disk tool. Use this tool to choose your computer’s default startup disk and reboot into another operating system. For example, it’s useful if you have Windows installed alongside Mac OS X with Boot Camp.

    Add or Remove an EFI Firmware Password

    You can also add a firmware password to your Mac. This works like a BIOS password or UEFI password on a Windows or Linux PC. Click the Utilities menu on the bar at the top of your screen and select Firmware Password Utility to open this tool. Use the tool to turn on a firmware password, which will prevent your computer from starting up from a different hard disk, CD, DVD, or USB drive without the password you provide. This prevents people from booting up your Mac with an unauthorized operating system. If you’ve already enabled a firmware password, you can remove it from here.

    Use Network Tools to Troubleshoot Your Connection

    Select Utilities > Network Utility to open a network diagnostic tool. This utility provides a graphical way to view your network connection information. You can also use the netstat, ping, lookup, traceroute, whois, finger, and port scan utilities from here. These can be helpful to troubleshoot Internet connection problems. For example, the ping command can demonstrate whether you can communicate with a remote host and show you if you’re experiencing packet loss, while the traceroute command can show you where a connection is failing if you can’t connect to a remote server.
    Open a Terminal

    If you’d like to get your hands dirty, you can select Utilities > Terminal to open a terminal from here. This terminal allows you to do more advanced troubleshooting. Mac OS X uses the bash shell, just as typical Linux distributions do.

    Most people will just need to use the Reinstall Mac OS X option here, but there are many other tools you can benefit from. If the Recovery Mode files on your Mac are damaged or unavailable, your Mac will automatically download them from Apple so you can use the full recovery environment.

    Read the article

  • Silverlight 5 Hosting :: Features in Silverlight 5 and Release Date

    - by mbridge
    Silverlight 5 was finally announced at the Silverlight FireStarter event on 2nd December, 2010. This new version, earlier labeled 'Future of Microsoft Silverlight', has now come much closer to going live, as the first Silverlight 5 Beta is expected to ship during the early months of 2011. For the full-fledged, final release of Silverlight 5, however, we have to wait several more months, as it is likely to be made available in Q3 2011. As expected, this latest edition features many new capabilities, extending developer productivity into premium media experiences and feature-rich business applications. It comes with many feature updates as well as new technologies that raise the standard of Silverlight applications, which are now fine-tuned to produce next-generation business and media solutions capable of meeting the requirements of advanced web-based app development. Silverlight 5 is set to replace the previous fourth version; it includes more than forty new features while dropping various deprecated elements that were present earlier. It brings some major performance enhancements and better support for various other tools and technologies. Following are some of the changes registered to be available in the Silverlight 5 Beta edition, which is scheduled to launch during Q1 2011. Silverlight 5: Premium Media Experiences The media features of Silverlight 5 have seen major enhancements, with many optimizations made to deliver richer solutions. Its capabilities have been extended to make things easier, faster and more efficient. Silverlight media solutions are already part of many companies' offerings, with various on-demand Silverlight services in use, and with the arrival of the next-generation premium media capabilities of Silverlight 5 they are expected to find even wider use in web-based projects and media solutions. - The most notable element in Silverlight 5 is its support for GPU-based hardware acceleration, which is intended to lower the CPU load significantly and thereby allow faster rendering of media content while consuming fewer resources. This should be particularly helpful for low-end machines, letting them play full HD media content without the lag caused by processor load. Hardware-decoded video playback is thus a major step toward delivering high-quality media content on the web more efficiently. - Along with hardware video decoding to minimize processor load, Silverlight 5 includes another optimization to reduce power consumption by handling power-saver settings more intelligently. With this in effect, the computer is allowed to switch to sleep mode when no video playback is in progress, while screensavers are prevented from popping up and causing annoyances during playback. Additional power-saver options will be available to suit the user's requirements and purpose.
- The Silverlight TrickPlay feature is a great way to tweak Silverlight-powered media content, and is useful for video tutorial sites or presentations. It lets the user slow down or speed up playback as required without compromising output quality. Such manipulation normally makes the content's audio go off-pitch, but with TrickPlay the audio progresses seamlessly with the video without skipping any part of it. - In addition to the above, Silverlight 5 will feature wireless control of media content using remote controllers. With such remote devices it becomes easier to handle the various media playback controls, providing more freedom while enjoying premium media services. Silverlight 5: Business Application Development The application development story has been extended with new and useful technologies, and existing methods have been revised to work better than before. From UI improvements to advanced technical aspects, Silverlight 5 scores well on all fronts for producing next-generation business applications, giving developers more creative and resourceful options. - The databinding feature inherited from WPF is made more effective by new capabilities intended to improve developer productivity. It becomes much more convenient to debug databinding components or expressions and make them work flawlessly. Additional databinding features include Ancestor RelativeSource, Implicit DataTemplates, and Model View ViewModel (MVVM) support with the DataContextChanged event, among others. - It comes with refined text and printing services, providing clearer text rendering and many improvements to layout. New support has been added for OpenType fonts, multi-column text, linked text containers and character leading, to name a few. Important printing additions include the Postscript Vector Printing API, which allows printing tasks to be programmed in a user-defined way, and Pivot functionality for visualizing information. - Graphics support is one of the key improvements: it is now possible to use three-dimensional graphics with GPU acceleration. This makes it possible to deliver rich visualizations and media content within business apps, with support for full HD content at 1080p quality. - Silverlight 5 adds support for 64-bit operating systems and browsers and is optimized for better performance. It supports background threads for networking, which can considerably reduce network latency. The Out-of-Browser functionality adds support for various libraries as well as the Win32 API. It also brings testing support with VS 2010, which is largely automated, and increases the security of Silverlight 5 applications through improved group policy support.

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
    If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won’t be running and interfering with the clean-up process. The Problem With Cleaning Up Malware From Within Windows Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows. Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn’t running. The problem here is that the malware and antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf — the malware can put up a fight. Why You Should Use an Antivirus Boot Disc Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won’t have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won’t be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won’t be able to set up the tricks they use at Windows boot time to hide themselves from the rest of the operating system. The antivirus will be able to see the rootkits and remove them. These tools are often referred to as “rescue disks.” They’re meant to be used when you need to rescue a hopelessly infected system. Bootable Antivirus Options As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they’re offered by companies that specialize in paid antivirus solutions. Here are a few good options: avast! Rescue Disk – We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media. BitDefender Rescue CD – BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc. Kaspersky Rescue Disk – Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc. These are just a handful of options. If you prefer another antivirus for some reason — Comodo, Norton, Avira, ESET, or almost any other antivirus product — you’ll probably find that it offers its own system rescue disk. 
How to Use an Antivirus Boot Disc Using an antivirus boot disc or USB drive is actually pretty simple. You’ll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer. Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn’t, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this. Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove — or even detect — all the malware from within it. Image Credit: aussiegall on Flickr     

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architecture approach used in modern networking. This approach divides a network into three distinct components. Three-Tier Client/Server Model Components Client Component Server Component Database Component The Client Component typically represents any device on the network. A basic example would be a computer or another network/web-enabled device connected to the network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This is done through various software clients; an example can be seen in a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process. The Server Components of the network return data to the requesting client based on specific client requests.  Server Components also inherit the attributes of a Client Component in that they are devices on the network and can also request information from other Server Components. What differentiates a Client Component from a Server Component, however, is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets each request, processes the web pages, and then returns the processed data to the web browser client so that it may render the data for the user to interpret. The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on the network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed. The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model Presentation Layer/Logic Business Layer/Logic Data Layer/Logic The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting that data back to the user.  Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows segments of the Business Layer to be distributable and interchangeable, because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, a network device is placed in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device they are redirected to a specific server based on a few factors. 
Common Load Balancing Factors Current Server Availability Current Server Response Time Current Server Priority The Business Layer and its corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before results are returned to the end-user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected. The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in most modern applications, data is stored in a database management system; however, data can also take the form of files stored on a file server. In addition, a database can take one of several forms. Common Database Formats XML File Pipe Delimited File Tab Delimited File Comma Delimited File (CSV) Plain Text File Microsoft Access Microsoft SQL Server MySql Oracle Sybase The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall it as needed, so that the application does not have to retain all data in memory, avoiding the overhead that would otherwise be created. As you can see, the Three-Tier Client/Server Networking Model and the Three-Tier Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is a great fit when an application needs to distribute work across the network. Network Components and Application Layers Interaction Database components store all data needed for the Data Access Layer to manipulate and return to the Business Layer Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer Client Component hosts the Presentation Layer that interprets the data and presents it to the user
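To make the mapping between layers a little more concrete, here is a minimal JavaScript sketch of the three layers described above. All of the names, the in-memory array standing in for the Database Component, and the unique-email rule are illustrative assumptions, not code from any particular product:
// Data Layer: stores and retrieves raw records (an in-memory array stands in for a real database).
var accountStore = [];
function findAccountByEmail(email) {
    return accountStore.find(function (a) { return a.email === email; });
}
function saveAccount(account) {
    accountStore.push(account);
    return account;
}
// Business Layer: applies rules before data reaches the Data Layer,
// for example the "one account per email address" rule mentioned above.
function createAccount(email, name) {
    if (findAccountByEmail(email)) {
        throw new Error("An account already exists for " + email);
    }
    return saveAccount({ email: email, name: name });
}
// Presentation Layer: only formats what the Business Layer returns.
function renderAccount(account) {
    return "Created account for " + account.name + " <" + account.email + ">";
}
console.log(renderAccount(createAccount("jane@example.com", "Jane")));
Because the Presentation Layer only depends on the shape of the object the Business Layer returns, either of the other two layers could be moved to a different server, or replaced entirely, without changing it.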

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue only for governments, but that is not the case: it is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the moment of this blog anyway, we don't know yet if that is true or how they got the information, but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information...he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization...still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive.  Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.  
Centrally set policies can control not only whether they can download to removable devices, but also whether they can upload information (see StuxNet for why that is important...). In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic. But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example. Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information. Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • BizTalk host throttling &ndash; Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (ie: it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little and see what happens deep inside BizTalk, and thus understand the reasons for this behavior. NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future… Test conditions The singleton orchestration For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message: Throttling settings I have two distinct hosts: one that hosts the receive port (basic FILE port), Ports_ReceiveHost, and one that hosts the orchestration, ProcessingHost. In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000) Evolution of performance counters when submitting messages Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object): Database size, High database size, Message delivery throttling state, Message publishing throttling state, Message delivery delay (ms), Message publishing delay (ms), Message delivery throttling state duration, Message publishing throttling state duration. (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.) Database size It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: During this test I submitted 600 messages, one message at a time every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened: From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fired. At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed. 
Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) began growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing. Digging further So, what happens to the database queue then? Is it flushed some day or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (ie: exactly 30 min later) the throttling stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I’ll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one. Digging even further If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :) A last word Let’s end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment: private data (they do not want to send or store sensitive data off-site), a high dollar investment in current infrastructure, applications that currently run well but may need additional periodic capacity, and current applications not designed in a stateless fashion. In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization. Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction between the local infrastructure and Windows or SQL Azure to a great deal. The patterns fall into two broad schemas, and even these can be mixed. 1. Client-Centric Hybrid Patterns In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as the client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the requesting client carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario. 2. System-Centric Hybrid Patterns Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities. Code calls then come from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract. 
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application. Connection resiliency - On-premises applications normally have low latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic (a minimal sketch follows at the end of this post). Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.  Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle. Resources: How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx  General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx   
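To illustrate the connection-resiliency point above, here is a minimal retry sketch. It is plain JavaScript with made-up names (callService, maxAttempts, baseDelayMs); it is not part of any Azure SDK, just the general shape of the extended retry logic a hybrid call should have:
// Retry a flaky remote call a few times, backing off a little longer on each failure.
function callWithRetry(callService, maxAttempts, baseDelayMs, onDone) {
    var attempt = 0;
    function tryOnce() {
        attempt++;
        callService(function (err, result) {
            if (!err) { return onDone(null, result); }          // success: hand back the result
            if (attempt >= maxAttempts) { return onDone(err); } // give up after the last attempt
            setTimeout(tryOnce, baseDelayMs * attempt);         // wait longer before the next try
        });
    }
    tryOnce();
}
In a real hybrid application the retry policy (attempt count, back-off, which errors are retryable) would be tuned per service, but the point is that the calling code, not the network, owns that logic.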

    Read the article

  • The Top Ten Security Top Ten Lists

    - by Troy Kitch
    As a marketer, we're always putting together the top 3, or 5 best, or an assortment of top ten lists. So instead of going that route, I've put together my top ten security top ten lists. These are not only for security practitioners, but also for the average Joe/Jane; because who isn't concerned about security these days? Now, there might not be ten for each one of these lists, but the title works best that way. Starting with my number ten (in no particular order): 10. Top 10 Most Influential Security-Related Movies Amrit Williams pulls together a great collection of security-related movies. He asks for comments on which one made you want to get into the business. I would have to say that my most influential movie(s), that made me want to get into the business of "stopping the bad guys", would have to be the James Bond series. I grew up on James Bond movies: thwarting the bad guy and saving the world. I recall being both ecstatic and worried when Silicon Valley-themed "A View to A Kill" hit theaters: "An investigation of a horse-racing scam leads 007 to a mad industrialist who plans to create a worldwide microchip monopoly by destroying California's Silicon Valley." Yikes! 9. Top Ten Security Careers From movies that got you into the career, here's a top 10 list of security-related careers. It starts with number ten, Information Security Analyst, and ends with number one, Malware Analyst. They point out the significant growth in security careers and indicate that "according to the Bureau of Labor Statistics, the field is expected to experience growth rates of 22% between 2010-2020." If you are interested in getting into the field, Oracle has many great opportunities all around the world.  8. Top 125 Network Security Tools A bit outside of the range of 10, the top 125 Network Security Tools is an important list because it includes a prioritized list of key security tools practitioners are using in the hacking community, regardless of whether they are vendor supplied or open source. The exhaustive list provides ratings, reviews, searching, and sorting. 7. Top 10 Security Practices I have to give a shout out to my alma mater, Cal Poly, SLO: Go Mustangs! They have compiled their list of top 10 practices for students and faculty to follow. Educational institutions are a common target of web based attacks and miscellaneous errors according to the 2014 Verizon Data Breach Investigations Report.    6. (ISC)2 Top 10 Safe and Secure Online Tips for Parents This list is arguably the most important list on my list. The tips were "gathered from (ISC)2 member volunteers who participate in the organization's Safe and Secure Online program, a worldwide initiative that brings top cyber security experts into schools to teach children ages 11-14 how to protect themselves in a cyber-connected world…If you are a parent, educator or organization that would like the Safe and Secure Online presentation delivered at your local school, or would like more information about the program, please visit here.” 5. Top Ten Data Breaches of the Past 12 Months This type of list is always changing, so it's nice to have a current one here from Techrader.com. They've compiled and commented on the top breaches. It is likely that most readers here were affected in some way or another. 4. Top Ten Security Comic Books Although mostly physical security controls, I threw this one in for fun. My vote for #1 (not on the list) would be Professor X. 
The guy can breach confidentiality, integrity, and availability just by messing with your thoughts. 3. The IOUG Data Security Survey's Top 10+ Threats to Organizations The Independent Oracle Users Group annual survey on enterprise data security, Leaders Vs. Laggards, highlights what Oracle Database users deem as the top 12 threats to their organization. You can find a nice graph on page 9; Figure 7: Greatest Threats to Data Security. 2. The Ten Most Common Database Security Vulnerabilities Though I don't necessarily agree with all of the vulnerabilities in this order...I like a list that focuses on where two-thirds of your sensitive and regulated data resides (Source: IDC).  1. OWASP Top Ten Project The Open Web Application Security Project puts together its annual list of the 10 most critical web application security risks that organizations should be including in their overall security, business risk and compliance plans. In particular, SQL injection risks continue to rear their ugly head each year. Oracle Audit Vault and Database Firewall can help prevent SQL injection attacks and monitor database and system activity as a detective security control. Did I miss any?
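As a footnote to the OWASP item, the classic application-side mitigation for SQL injection is to keep the SQL text and the user input separate. The sketch below uses a hypothetical db.query function purely for illustration; the exact API depends on your driver:
// Vulnerable: user input is concatenated into the SQL text and can change the statement.
function findUserUnsafe(db, name, callback) {
    db.query("SELECT * FROM users WHERE name = '" + name + "'", callback);
}
// Safer: the statement and the value travel separately, so input cannot alter the SQL.
function findUserSafe(db, name, callback) {
    db.query("SELECT * FROM users WHERE name = ?", [name], callback);
}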

    Read the article

  • SQL SERVER – Identifying guest User using Policy Based Management

    - by pinaldave
    If you are following my recent blog posts, you may have noticed that I’ve been writing a lot about the guest user in SQL Server. Here are all the blog posts which I have written on this subject: SQL SERVER – Disable Guest Account – Serious Security Issue SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in SQL SERVER – Detecting guest User Permissions – guest User Access Status SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database One of the requests I received was whether we could create a policy that would check whether the guest user is enabled in user databases. Well, here is a quick tutorial to answer this. Let us see how quickly we can do it. Requirements Check if the guest user is disabled in all the user-created databases. Exclude the master, tempdb and msdb databases from guest user validation. We will create the following conditions based on the above two requirements: If the name of the user is ‘guest’ If the user has connect (@hasDBAccess) permission in the database Check in all user databases, except master, tempdb and msdb Once we create these conditions, we will create a policy which will validate them. Condition 1: Is the User Guest? Expand the Database >> Management >> Policy Management >> Conditions Right click on Conditions, and click on “New Condition…”. First we will create a condition to validate whether the user name is ‘guest’, and if so, we will further validate whether it has DB access. Check the image for the necessary configuration for the condition: Facet: User Expression: @Name = ‘guest’ Condition 2: Does the User have DBAccess? Expand the Database >> Management >> Policy Management >> Conditions Right click on Conditions and click on “New Condition…”. Now we will validate whether the user has DB access. Check the image for the necessary configuration for the condition: Facet: User Expression: @hasDBAccess = False Condition 3: Exclude Databases Expand the Database >> Management >> Policy Management >> Conditions Right click on Conditions and click on “New Condition…”. Now we will create a condition to check whether the database name is master, tempdb or msdb; if the database name is any of them, we will not validate the first condition against it. Check the image for the necessary configuration for the condition: Facet: Database Expression: @Name != ‘msdb’ AND @Name != ‘tempdb’ AND @Name != ‘master’ The next step is creating a policy which will enforce these conditions. Creating a Policy Right click on Policies and click “New Policy…”. Here, we specify which condition we want to validate and what the target is. Condition: Has User DBAccess Target Database: Every Database except (master, tempdb and msdb) Target User: Every User in Target Database with name ‘guest’ Now we have two evaluation modes: 1) On Demand and 2) On Schedule We will select On Demand in this example; however, you can change the mode to On Schedule through the drop down menu, and select the interval of the evaluation of the policy. Evaluate the Policies We have selected On Demand as our policy evaluation mode. We will now evaluate the policy by executing Evaluate. Click on Evaluate and it will give the following result: The result demonstrates that one of the databases has a policy violation. The user guest is enabled in the AdventureWorks database. You can disable the guest user by running the following code in the AdventureWorks database. 
USE AdventureWorks; REVOKE CONNECT FROM guest; Once you run the above query, you can evaluate the policy again. Notice that the policy violation is now fixed. You can change the policy evaluation mode to On Schedule and evaluate the policy at a set interval. You can check the history of the policy and detect any violations. Quiz I have created three conditions to check if the guest user has database access or not. Now I want to ask you: Is it possible to do the same with 2 conditions? If yes, HOW? If no, WHY NOT? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Policy Management

    Read the article

  • WebGL First Person Camera - Matrix issues

    - by Ryan Welsh
    I have been trying to make a WebGL FPS camera.I have all the inputs working correctly (I think) but when it comes to applying the position and rotation data to the view matrix I am a little lost. The results can be viewed here http://thistlestaffing.net/masters/camera/index.html and the code here var camera = { yaw: 0.0, pitch: 0.0, moveVelocity: 1.0, position: [0.0, 0.0, -70.0] }; var viewMatrix = mat4.create(); var rotSpeed = 0.1; camera.init = function(canvas){ var ratio = canvas.clientWidth / canvas.clientHeight; var left = -1; var right = 1; var bottom = -1.0; var top = 1.0; var near = 1.0; var far = 1000.0; mat4.frustum(projectionMatrix, left, right, bottom, top, near, far); viewMatrix = mat4.create(); mat4.rotateY(viewMatrix, viewMatrix, camera.yaw); mat4.rotateX(viewMatrix, viewMatrix, camera.pitch); mat4.translate(viewMatrix, viewMatrix, camera.position); } camera.update = function(){ viewMatrix = mat4.create(); mat4.rotateY(viewMatrix, viewMatrix, camera.yaw); mat4.rotateX(viewMatrix, viewMatrix, camera.pitch); mat4.translate(viewMatrix, viewMatrix, camera.position); } //prevent camera pitch from going above 90 and reset yaw when it goes over 360 camera.lockCamera = function(){ if(camera.pitch > 90.0){ camera.pitch = 90; } if(camera.pitch < -90){ camera.pitch = -90; } if(camera.yaw <0.0){ camera.yaw = camera.yaw + 360; } if(camera.yaw >360.0){ camera.yaw = camera.yaw - 0.0; } } camera.translateCamera = function(distance, direction){ //calculate where we are looking at in radians and add the direction we want to go in ie WASD keys var radian = glMatrix.toRadian(camera.yaw + direction); //console.log(camera.position[3], radian, distance, direction); //calc X coord camera.position[0] = camera.position[0] - Math.sin(radian) * distance; //calc Z coord camera.position[2] = camera.position [2] - Math.cos(radian) * distance; console.log(camera.position [2] - (Math.cos(radian) * distance)); } camera.rotateUp = function(distance, direction){ var radian = glMatrix.toRadian(camera.pitch + direction); //calc Y coord camera.position[1] = camera.position[1] + Math.sin(radian) * distance; } camera.moveForward = function(){ if(camera.pitch!=90 && camera.pitch!=-90){ camera.translateCamera(-camera.moveVelocity, 0.0); } camera.rotateUp(camera.moveVelocity, 0.0); } camera.moveBack = function(){ if(camera.pitch!=90 && camera.pitch!=-90){ camera.translateCamera(-camera.moveVelocity, 180.0); } camera.rotateUp(camera.moveVelocity, 180.0); } camera.moveLeft = function(){ camera.translateCamera(-camera.moveVelocity, 270.0); } camera.moveRight = function(){ camera.translateCamera(-camera.moveVelocity, 90.0); } camera.lookUp = function(){ camera.pitch = camera.pitch + rotSpeed; camera.lockCamera(); } camera.lookDown = function(){ camera.pitch = camera.pitch - rotSpeed; camera.lockCamera(); } camera.lookLeft = function(){ camera.yaw= camera.yaw - rotSpeed; camera.lockCamera(); } camera.lookRight = function(){ camera.yaw = camera.yaw + rotSpeed; camera.lockCamera(); } . If there is no problem with my camera then I am doing some matrix calculations within my draw function where a problem might be. 
//position cube 1 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.translate(worldMatrix, worldMatrix, [-20.0, 0.0, -30.0]); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); setShaderMatrix(); gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer); gl.vertexAttribPointer(shaderProgram.attPosition, 3, gl.FLOAT, false, 8*4,0); gl.vertexAttribPointer(shaderProgram.attTexCoord, 2, gl.FLOAT, false, 8*4, 3*4); gl.vertexAttribPointer(shaderProgram.attNormal, 3, gl.FLOAT, false, 8*4, 5*4); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, myTexture); gl.uniform1i(shaderProgram.uniSampler, 0); gl.useProgram(shaderProgram); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); //position cube 2 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); mat4.translate(worldMatrix, worldMatrix, [40.0, 0.0, -30.0]); setShaderMatrix(); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); //position cube 3 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); mat4.translate(worldMatrix, worldMatrix, [20.0, 0.0, -100.0]); setShaderMatrix(); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); camera.update();
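One thing that may be worth double-checking (a hedged suggestion based only on the code as posted): in gl-matrix, mat4.multiply(out, a, b) computes a * b, so with column-vector conventions the model-view matrix is usually built as view * world, and any mat4.translate applied to worldMatrix has to happen before the multiply or it never reaches mvMatrix. A minimal sketch for one cube, reusing the offsets from the question; everything else is assumed:
// Sketch only: assumes gl-matrix 2.x and the camera/viewMatrix state from the question.
var worldMatrix = mat4.create();                                 // identity
mat4.translate(worldMatrix, worldMatrix, [-20.0, 0.0, -30.0]);   // place the cube first
var mvMatrix = mat4.create();
mat4.multiply(mvMatrix, viewMatrix, worldMatrix);                // view * world, not world * view
setShaderMatrix();                                               // then upload the matrices as before
// If camera.position/yaw/pitch describe the camera itself (not its negation),
// the view matrix is the inverse of the camera's world transform:
var camWorld = mat4.create();
mat4.translate(camWorld, camWorld, camera.position);
mat4.rotateY(camWorld, camWorld, glMatrix.toRadian(camera.yaw));   // rotate functions expect radians
mat4.rotateX(camWorld, camWorld, glMatrix.toRadian(camera.pitch));
mat4.invert(viewMatrix, camWorld);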

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every software has issues, or as we like to call them "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that like with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time and typically they cost more plus you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop and much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that instead of statically analyzing your code, they analyze what your code does dynamically at runtime. Whether you have to setup some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11 and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want you to find the issue with code execution? Use a debugger - let’s break this down further next. 
This is how I think of debugging: There is post mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements to trace events (e.g. ETW) to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is for example what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should or continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of this areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically: Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
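The nginx configuration itself is out of scope here, but the routing idea described above – one front door that hands blog requests to WordPress and everything else to IIS – can be sketched in a few lines of Node.js. The ports, host and path prefix below are assumptions for illustration, not the actual Simple-Talk setup:
// Illustrative path-based reverse proxy using Node's built-in http module.
var http = require("http");
http.createServer(function (req, res) {
    // Requests under /blogs go to the WordPress backend; everything else goes to IIS.
    var backendPort = req.url.indexOf("/blogs") === 0 ? 8081 : 8080;
    var proxied = http.request({
        host: "127.0.0.1",
        port: backendPort,
        path: req.url,
        method: req.method,
        headers: req.headers
    }, function (backendRes) {
        res.writeHead(backendRes.statusCode, backendRes.headers);
        backendRes.pipe(res);   // stream the backend response straight back to the caller
    });
    req.pipe(proxied);          // forward the request body to the chosen backend
}).listen(80);
In production a dedicated reverse proxy such as nginx handles the same job (plus TLS, caching and so on), which is exactly the role it plays in the architecture described above.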

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the DataControls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter, so only a defined set of input keys is allowed, you need to change the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column which filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open theComponent Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This demarks the first part in the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field Point the client filter method property to a JavaScript function that you reference or add through using the af:resource tag and set the type property value to keyDown. <af:document id="d1">     <af:resource type="javascript" source="/js/filterHandler.js"/> … The filter definition looks as shown below <af:inputText label="Label 1" id="it1"                         value="#{vs.filterCriteria.Employe        <af:clientListener method="suppressCharacterInput"                                     type="keyDown"/> </af:inputText> The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
//Allow numbers, cursor control keys and delete keys
function suppressCharacterInput(evt) {
    // Digits 0-9 are key codes 48-57, numeric keypad digits are 96-105;
    // everything below 48 (backspace, tab, arrows, delete, ...) is also allowed.
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    if (!((_keyCode <= 57) || (_keyCode >= 96 && _keyCode <= 105))) {
        // Any other key: restore the previous value and swallow the event.
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

//Allow characters, cursor control keys and delete keys
function suppressNumericInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    // Check for numbers: digits 0-9 (48-57) and numeric keypad digits (96-105).
    if ((_keyCode >= 48 && _keyCode <= 57) ||
        (_keyCode >= 96 && _keyCode <= 105)) {
        // A digit was pressed: restore the previous value and swallow the event.
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

But what if the browser doesn't allow JavaScript? Don't worry about this: if browsers did not support JavaScript, then ADF Faces as a whole would not work and you would have a different problem.

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera's memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn't work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That's what happened to me, and prompted writing this article.

    Restore that File or Photo using Recuva

    The first piece of software that you'll want to try is called Recuva, and it's extremely easy to use—just make sure when you are installing it, that you don't accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you've installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let's start through the process of recovering those pictures you shouldn't have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you'll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera's SD card, that's where I'm looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time.

    And now, you'll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn't stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don't have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome!

    Restore that File or Photo using DiskDigger

    If you don't have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I've tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it's best to have both of them in your toolkit. Note that DiskDigger doesn't require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from… Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you'll probably want to select the first one first. I've also had much better luck with the regular scan, rather than the "dig deeper" one. If you do choose the "dig deeper" one, you'll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you've come up with the results, you can click on the items on the left-hand side, and see a preview on the right. You can select one or more files, and choose to restore them. It's pretty simple!

    Download DiskDigger from dmitrybrant.com
    Download Recuva from piriform.com

    Good luck recovering your deleted files!
And keep in mind, DiskDigger is a totally free donationware software from a single, helpful guy… so if his software helps you recover a photo you never thought you'd see again, you might want to think about throwing him a dollar or two.

    Read the article

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster.

    I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture:

    Basically, the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World::SpatialBase* through SpatialInfo objects, owned by the bodies themselves.

    There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. Here's how its architecture looks:

    The Grid class owns a 2D array of Cell*. The Cell class contains two collections of (non-owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in.

    As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells.

    After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. Here is how broad-phase collision detection works:

    Part 1: spatial info update. For each Body body:
    - The top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and filled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell). body is now guaranteed to know what cells it occupies.
    - For a performance boost, the body stores a pointer to every map<int, vector<Body*>> of every cell it occupies, where the int is a group from body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>. body is now guaranteed to have a pointer to every vector<Body*> of bodies of the groups it needs to check collision against.

    Part 2: actual collision checks. For each Body body:
    - body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking if bodiesToCheck already contains the body we're trying to add.

    const vector<Body*>& GridInfo::getBodiesToCheck()
    {
        bodiesToCheck.clear();
        for(const auto& q : queries)
            for(const auto& b : *q)
                if(!contains(bodiesToCheck, b))
                    bodiesToCheck.push_back(b);
        return bodiesToCheck;
    }

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks if the vector already contains a body with std::find.
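    For reference, here is a minimal sketch of what that contains helper might look like, based purely on the description above; the exact signature and headers in the real sources may differ.

    #include <algorithm> // std::find
    #include <vector>

    // Linear scan over the vector: each call is O(n) in the number of bodies
    // already collected into bodiesToCheck.
    template <typename T>
    bool contains(const std::vector<T>& vec, const T& value)
    {
        return std::find(std::begin(vec), std::end(vec), value) != std::end(vec);
    }

    Because this linear search runs once per candidate body, the duplicate check alone makes getBodiesToCheck() roughly quadratic in the number of candidate bodies in the worst case.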
Collision is checked and resolved for every body in bodiesToCheck. That's it. So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned, or I make assumptions about the simulation that are later proven false. My question is: how can I optimize the broad-phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned in order to allow for more performance?

Actual implementation: SSVSCollision
Body.h, Body.cpp
World.h, World.cpp
Grid.h, Grid.cpp
Cell.h, Cell.cpp
GridInfo.h, GridInfo.cpp

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >