Search Results

Search found 27745 results on 1110 pages for 'ajax control toolkit'.

  • What guides or standards do you use for version control in your team?

    - by PaulHurleyuk
    I'm starting to do a small amount of development within my company. I'm intending to use Git for version control, and I'm interested to see what guidelines or standards people use around version control in their groups, similar to the coding standards that are often written within a group, for the group. I'm assuming there will be things like the following (see the sketch below for the release items):
    - Commit often (at least every day/week/meeting, etc.)
    - Release builds are always made from the master branch
    - Prior to release, a new branch is created for testing and tagged as such; only bug fixes go in from that point onwards. The final release is tagged as such, and the bug fixes are merged back into the trunk
    - Each developer has a public repo
    - New features get their own branch
    Obviously a lot of this will depend on which VCS you're using and how you've structured it. Similar questions:
    http://stackoverflow.com/questions/273695/git-branch-naming-best-practices
    http://stackoverflow.com/questions/2006265/is-there-an-standard-naming-convention-for-git-tags
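
    A minimal sketch in Git of the release items above (branch and tag names are illustrative, not a standard):

        # cut a release branch for testing; only bug fixes land here
        git checkout -b release-1.0 master
        git tag -a 1.0-rc1 -m "release candidate for testing"

        # after testing, tag the final release on the release branch
        git tag -a 1.0 -m "final 1.0 release"

        # merge the accumulated bug fixes back into master
        git checkout master
        git merge release-1.0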

  • Does any centralized version control faster than SVN exist?

    - by Savageman
    Hello, I've been using SVN for a long time and now we're trying out Git. I'm not getting into the centralized vs. decentralized debate here; my only concern is speed. The latter tool is much faster. But sometimes I NEED to work with a centralized approach, which is simpler and less complex than the decentralized one. The learning curve is much shorter, which saves a lot of time (while digging into a decentralized tool would waste time, given that the learning curve is much longer and we hit more problems when working with it). However, SVN is really slow compared to Git, and I don't think that has anything to do with it being centralized. Decentralized systems also have to deal with server connections and file transfer. So I can easily imagine that a faster implementation of centralized version control could exist. Does anyone have any clue on this?

  • I work on local copies of files and upload them to a remote server on save. What version control system should I use?

    - by 10goto10
    Here's my situation: My files are on a remote server (Linux). When I want to edit a file at home on my Windows machine, my editor (PSPad) downloads a copy. When I save the document, my editor uploads it to the server, overwriting the previous version. Is there a version control system, preferably GUI driven, that can handle this situation? Additional info: I probably can't install elaborate software on the remote server, but can on my own computer. Concurrent Versions System (CVS) is installed on the remote server. Uploading/downloading goes through an FTP-to-SFTP bridge set up with Bitvise Tunnelier.

  • How to embed a web browser control in a cross-platform application?

    - by Gil
    Hi, I need to write this application quickly: a simple window that wraps a web browser control and runs HTML pages. The browser UI (e.g. navigation buttons) should be suppressed. As a .NET developer, I would embed the WebBrowser OCX in a Windows Form, but this has to run on Mac as well!! I found the following cross-platform candidates. Which one would you choose (in terms of simplicity, stability, community support, etc.)?
    1) wxWidgets (www.kirix.com/labs/wxwebconnect.html)
    2) Qt: www.youtube.com/watch?v=Ee8eRwjbcFk&feature=related
    3) Mono: www.mono-project.com/WebBrowser
    Thanks!

  • Is there a "dual user check-in" source control system?

    - by Zubair
    Are there any source control systems that require another user to validate the source code before it can be checked in? I want to know because this is one technique to make sure that code quality stays high. Update: There has been talk of branches in the answers, and while I feel branches have their place, I think branches are something different: when a developer's code is ready to go into the main branch, it should be checked. Most often, though, I see that when this happens, the lead developer or whoever is responsible for the merge into the main branch/stream just puts the code into the main branch as long as it compiles, and does no more checks than that. I want the idea of two people putting their names to the code at an early stage, so that it introduces some responsibility, and also because code is cheaper to fix early on, while it is still fresh in the developer's mind.

  • How would you use version control for personal data, like a personal website?

    - by nn
    This is more of a use-case question: I generate static files for a personal website using txt2tags, and I was thinking of storing this information in a git repository. Normally I use RCS since it's simplest and I'm a single user, but there seems to be a large trend of people using git/svn/cvs/etc. for personal data, and I thought this may also be a good way to learn some of the basics of the tool (obviously most of the learning happens in an environment where you collaborate). So, back to the question: how would you use a version control system such as git to manage a personal website?
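
    One minimal way to run this with Git (a sketch; the server paths and the bare "publish" repo are illustrative assumptions, not part of the question):

        # on your machine: track the txt2tags sources
        git init ~/site
        cd ~/site
        git add index.t2t
        git commit -m "import site sources"

        # on the server, one time: create a bare repo to push to
        #   git init --bare ~/site.git

        # back on your machine: use the server repo as a backup/publish target
        git remote add origin user@server:site.git
        git push -u origin master

    A post-receive hook in the bare repo could then regenerate and deploy the static files on every push.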

  • What are the merits of using the various VCS (Version Control Systems) that exist to track Drupal projects?

    - by ZoFreX
    I'm trying to find the best version control strategy for my workflow with Drupal. I have multiple sites per install, and I'm pretty sure I'll have a separate repository for each website. As far as I know, the VCSs worth considering are:
    - SVN
    - Bazaar (bzr)
    - Git
    - Mercurial (hg)
    I know how these compare to each other in general, but I want to learn their merits/demerits for Drupal. If you're using (or have used) any of these for Drupal: What is your setup? What about the VCS you chose works well for managing Drupal projects? What doesn't?

  • What is the difference between uninstalling a program through Control Panel, and uninstalling via the program's own uninstall.exe?

    - by sunpech
    What is the difference between uninstalling a program through Control Panel, and uninstalling via the program's own uninstall.exe? Example: C:\Program Files (x86)\Notepad++\uninstall.exe. In general, I have read that it's better to uninstall a program via Windows' Control Panel. But for programs that ship their own uninstall.exe, is there any real difference between the two un-installations? Is the Control Panel's method cleaner at removing dependencies?

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The code below is called after the above; on Apache it takes about 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run under nginx, the progress request never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size   64k;
            client_header_buffer_size 4k;
            client_max_body_size      8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout    120;
            fastcgi_read_timeout    300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size     64k;
            fastcgi_buffers         4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log  C:/server/www/test.dev/logs/error.log;

        # HTTP server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate     C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?

  • socat and "no job control in this shell"

    - by Vi
    Running

        socat - exec:'bash -li',pty,stderr,ctty -

    gives:

        bash: no job control in this shell

    What options should I use to get a fully fledged shell like the one I get with ssh/sshd? I want to be able to connect the shell to everything socat can handle (SOCKS5, UDP, OpenSSL), but also to have a nice shell which correctly interprets all keys, the various Ctrl+C/Ctrl+Z combinations, and job control.
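
    One variation worth trying (an assumption based on socat's documented pty options, not a verified fix): job control needs the shell to be a session leader with a controlling tty, so add setsid alongside ctty:

        socat - exec:'bash -li',pty,stderr,setsid,sigint,sane,ctty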

  • Control-Backspace (unix-kill-rubout) for readline?

    - by Xepoch
    In readline(3) I should be able to map Control-Backspace to the same function as Control-W (unix-kill-rubout). Regardless of what I put in ~/.inputrc, I'm unable to get this recognized. For instance,

        \C-\b: unix-kill-rubout

    does not work. Can I map Control-Backspace to unix-kill-rubout in readline?
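
    Two notes worth adding here (assumptions about the environment, not from the question): in stock readline the function bound to Control-W is named unix-word-rubout, and most terminals transmit ^H or DEL for Backspace rather than any distinct Control-Backspace code. If yours sends ^H, a sketch for ~/.inputrc would be:

        # assumes the terminal transmits C-h when Control-Backspace is pressed
        "\C-h": unix-word-rubout

    Pressing Ctrl-V followed by the key combination at a shell prompt shows what the terminal actually sends.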

  • Shaping Outbound Traffic to Control Download Speeds with Linux

    - by Kyle Brandt
    I have a situation where a server makes lots of requests to big webservers all at the same time. Currently, I have no control over the amount or rate of the requests from the application that does this. The responses from these webservers are more than the internet line can handle (basically, we are launching a DoS on ourselves). I am going to push to get this fixed at the application level, but for the time being, is there any way I can use traffic shaping on the Linux server to control this? I know I can only shape outbound traffic, but maybe there is a way I can slow the TCP responses so the other side will detect congestion, and this will help my situation? If there is anything like this with tc, what might the configuration look like? The idea is that traffic control might help me control which packets get dropped before they reach my router.
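
    A rough sketch of one tc approach (interface name and rates are illustrative guesses, not a tested fix): police inbound traffic on the internet-facing interface so that excess packets are dropped early and the senders' TCP congestion control backs off:

        # attach an ingress qdisc to the interface facing the internet
        tc qdisc add dev eth0 handle ffff: ingress

        # drop anything above ~8mbit so the remote TCP stacks slow down
        tc filter add dev eth0 parent ffff: protocol ip u32 \
            match u32 0 0 \
            police rate 8mbit burst 100k drop flowid :1

    This polices (drops) rather than shapes (queues), which is the usual compromise for inbound traffic; true shaping of downloads would need an IFB device or a fix at the application level.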

  • OPTIONS request vs GET in Ajax

    - by user41172
    I have a PHP/javascript app that queries and returns info using an ajax request. On every server I've used so far, this works as expected, passing an Ajax GET request to the server and returning JSON data. On a new install, the query fails and returns nothing. I inspected the request, and it turns out that rather than going out as a GET, the query is being sent as an OPTIONS request. Is there any reason for this? I have no idea why this might happen. Thanks!

  • Control+Enter in browser completes only .com?

    - by Abhilash M
    I have this strange problem: how exactly does Control+Enter decide the domain of a website? If I type stackoverflow and then hit Control+Enter, it works and takes me to the homepage. But if I type ubuntuforums and then hit Control+Enter, it does not recognize that it's ubuntuforums.org; it goes to ubuntuforums.com instead. How exactly does this work? And if I need to change this behaviour, how should I do it?

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting headers look like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending two Cache-Control entries, and could this be a problem for clients?

  • Accessing ActiveX control through web server

    - by user847455
    I have developed an ActiveX control and registered it with a common CLSID. Using that CLSID, I access the ActiveX control in Internet Explorer (as a web page) with the following object tag in the .html file:

        <object id="GlobasysActiveX" width="1000" height="480" runat="server"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"></object>

    I want to access this web page through a web server. I have placed the page in the virtual directory, and accessing it as localhost\my.html works. But when I access it from a LAN computer, it does not load the ActiveX control from my computer. How can I embed the ActiveX control, or have it downloaded from my computer to the LAN computer, through the web server? Thanks in advance.
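
    For reference, a sketch of the usual IE deployment route (the .cab name and URL are hypothetical assumptions, not from the question): the object tag can carry a codebase attribute pointing at a signed CAB package on the server, which IE downloads and installs when the control is not registered locally:

        <object id="GlobasysActiveX" width="1000" height="480"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"
                codebase="http://yourserver/GlobasysActiveX.cab#version=1,0,0,0">
        </object>

    Here GlobasysActiveX.cab would be a CAB file containing the signed OCX, placed in the same virtual directory as the page.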

  • Remap control key in gnome-terminal?

    - by Colin
    I just installed Ubuntu to get more familiar with it, since I'll be using it in a new job shortly. I use Macs at home and in my current job, so I'd like to make it as Mac-like as possible. I've remapped the Command and Control keys using the following .xmodmap:

        remove control = Control_L Control_R
        remove mod4 = Super_L Super_R
        add control = Super_L Super_R
        add mod4 = Control_L Control_R

    This works great for everything except the terminal: since Ctrl-C is now mapped to Cmd-C, it still conflicts with what I'd like to use for copy. Is there any way I can remap the Control key just for the terminal? I'm willing to consider gnome-terminal alternatives if required.

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some code:

        import urllib.request
        from urllib import error

        headers = {}
        headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
        headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
        headers['Accept-Language'] = 'en-gb,en;q=0.5'
        #headers['Accept-Encoding'] = 'gzip, deflate'

        request = urllib.request.Request(sURL, headers=headers)
        try:
            response = urllib.request.urlopen(request)
        except error.HTTPError as e:
            print('The server couldn\'t fulfill the request.')
            print('Error code: {0}'.format(e.code))
        except error.URLError as e:
            print('We failed to reach a server.')
            print('Reason: {0}'.format(e.reason))
        else:
            f = open('output/{0}.html'.format(sFileName), 'w')
            f.write(response.read().decode('utf-8'))

    A url: http://groupon.cl/descuentos/santiago-centro

    The situation. Here's what I did:
    1. enable javascript in the browser
    2. open the url above and keep an eye on the console
    3. disable javascript
    4. repeat step 2
    5. use urllib2 to grab the webpage and save it to a file
    6. enable javascript
    7. open the file with the browser and observe the console
    8. repeat step 7 with javascript off

    Results:
    - In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton, and ajax was used to fill in the gaps. This is fine and not at all surprising.
    - Since the page should be SEO-friendly, it should work fine without js: in step 4 nothing happens in the console, and the skeleton page loads pre-populated, rendering the ajax unnecessary. This is also completely not confusing.
    - In step 7 the ajax calls are made but fail. This is also ok, since the urls they use are not local; the calls are thus broken. The page looks like the skeleton. This is also great and expected.
    - In step 8 no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like step 4.

    Question: What I want to do is use urllib2 to grab the html from step 4, but I can't figure out how. What am I missing, and how could I pull this off?

    To paraphrase: if I were writing a spider, I would want to be able to grab plain ol' HTML (as in what resulted in step 4). I don't want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The SEO-friendly site wants me to get what I want, because that's what SEO is all about. How would one go about getting plain HTML content, given the situation I outlined? To do it manually, I would turn off js, navigate to the page and copy the html. I want to automate this.

    Stuff I've tried: I used wireshark to look at packet headers, and the GETs sent from my pc in steps 2 and 4 have the same headers. Reading about SEO makes me think that this is pretty normal; otherwise techniques such as hijax wouldn't be used.

    Here are the headers my browser sends:

        Host: groupon.cl
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Here are the headers my script sends:

        Accept-Encoding: identity
        Host: groupon.cl
        Accept-Language: en-gb,en;q=0.5
        Connection: close
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

    The differences are:
    - my script has Connection = close instead of keep-alive. I can't see how this would cause a problem.
    - my script has Accept-Encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user-agent, though. If I change the encoding to match the browser request headers, then I have trouble decoding it. I'm working on this now...

    Watch this space; I'll update the question as new info comes up.
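
    Regarding the decoding trouble in that last point, a sketch of what usually has to change (an assumption about the cause, not a confirmed answer): if the request advertises Accept-Encoding: gzip, the body arrives compressed and must be decompressed before calling .decode('utf-8'):

        import gzip
        import urllib.request

        # same url as above; headers trimmed to the relevant one
        req = urllib.request.Request(
            'http://groupon.cl/descuentos/santiago-centro',
            headers={'Accept-Encoding': 'gzip, deflate'})
        resp = urllib.request.urlopen(req)

        body = resp.read()
        # only gunzip if the server actually compressed the response
        if resp.headers.get('Content-Encoding') == 'gzip':
            body = gzip.decompress(body)
        html = body.decode('utf-8')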

  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building an ASP.NET MVC application that uses external services to authenticate users. As far as ASP.NET is concerned, users are fully authenticated when they are redirected back from the external service; in my system they are logically authenticated when they have created a user profile. In this posting I will show you how to force ASP.NET MVC controller actions to demand the existence of custom user profiles.

    Using external authentication sources with AppFabric

    Suppose you want to be user-friendly and you don't force users to keep in mind another username/password when they visit your site. You can accept logins from different popular sites like Windows Live, Facebook, Yahoo, Google and many more. If a user has an account with one of these services, then he or she can use that account to log in to your site. If you have a community site, then you usually have support for user profiles too. Some of these providers give you some information about users and others don't, so the only thing in common you get from all those providers is some ID that identifies the user uniquely in that service.

    The image above shows how a new user joins your site. Existing users who already have a profile are directed to the user's homepage after they are authenticated. You can read more about how to solve the semi-authorized users problem in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames, which we don't get from all identity providers.

    Why is IIdentity.Name sometimes empty?

    The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. In short, the problem is that not all providers have a claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The following diagram illustrates what happens when a user gets a token from AppFabric ACS and is redirected to your site. When the user was authenticated using Windows Live ID, there is no name claim in the token, and that's why User.Identity.Name is empty. Okay, we can force nameidentifier to be used as the name (we can do it in the web.config file), but we have user profiles, and we want the username from the profile to be shown whenever a username is asked for.

    Modifying the name claim

    Now let's force IClaimsIdentity to use the username from our user profiles. You can read more about my profiles topic in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages, and you can find some useful extension methods for claims identity in my blog posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name:
    - we check if the user has a profile,
    - if the user has a profile, we check whether User.Identity.Name matches the name given by the profile,
    - if the names do not match, then the identity provider probably returned some name for the user; we remove the name claim and recreate it with the correct username,
    - we add the new name claim to the claims collection.

    All this stuff happens in the Application_AuthorizeRequest event of our web application. The code is here:

        protected void Application_AuthorizeRequest()
        {
            if (string.IsNullOrEmpty(User.Identity.Name))
            {
                var identity = User.Identity;
                var profile = identity.GetProfile();
                if (profile != null)
                {
                    if (profile.UserName != identity.Name)
                    {
                        identity.RemoveName();

                        var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);
                        var claimsIdentity = (IClaimsIdentity)identity;
                        claimsIdentity.Claims.Add(claim);
                    }
                }
            }
        }

    The RemoveName extension method is simple: it looks for name claims in the IClaimsIdentity claims collection and removes them.

        public static void RemoveName(this IIdentity identity)
        {
            if (identity == null)
                return;

            var claimsIdentity = identity as ClaimsIdentity;
            if (claimsIdentity == null)
                return;

            for (var i = claimsIdentity.Claims.Count - 1; i >= 0; i--)
            {
                var claim = claimsIdentity.Claims[i];
                if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")
                    claimsIdentity.Claims.RemoveAt(i);
            }
        }

    And we are done. Now User.Identity.Name returns the username from the user profile, and you can use it to show the username of the current user everywhere on your site.

    Conclusion

    Mixing AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible, but it is a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for this time; hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on Twelfth Night, it seems a fitting time to discuss consumerization, whereby technologies from the consumer market, such as Android and the iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices, accepting consumerization as something to embrace rather than resist.

    Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise?

    First and foremost, consumerization does not automatically represent great additional risk if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM: to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption.

    Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:
    - The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
    - The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization; defence in depth.
    - It is a generic solution, meaning the enterprise does not need to actively support the ever-expanding variety of consumer devices; the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services.
    - It is a tried and tested way of accessing sealed documents. People have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.

    For some scenarios, we also have the "IRM wrapper" option, which provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

  • Resources for using TFS for Agile Project Development?

    - by Amy P
    Our company just installed TFS for us to start using for our project development processes and source control. They want us to start using it to manage our projects as well. We have a small team, no current bug- or task-tracking software, and only 2 of our 3 developers have experience with any formal methodologies. What books, websites, and/or other information can you recommend for us to get started?

  • Is there a term for quasi-open source proprietary software?

    - by mwhite
    Say a company wants to keep development of new features of a piece of software internal, but wants to make the source code for previous versions public, up to and including existing public features, so that other people can benefit from using and modifying the software themselves, and even possibly contribute changes that can be applied to the development branch. Is there a term for this sort of arrangement, and what is the best way of accomplishing it using existing version control tools and platforms?

  • OnApplyTemplate not called in Custom Control

    - by Lasse O
    I am sick and tired of WPF and all its "if that doesn't work, try this" fixes. Well, here's one for the collection: I have a custom control which uses some PART controls:

        [TemplatePart(Name = "PART_TitleTextBox", Type = typeof(TextBox))]
        [TemplatePart(Name = "PART_TitleIndexText", Type = typeof(Label))]
        [TemplatePart(Name = "PART_TimeCodeInText", Type = typeof(TextBlock))]
        [TemplatePart(Name = "PART_TimeCodeOutText", Type = typeof(TextBlock))]
        [TemplatePart(Name = "PART_ApprovedImage", Type = typeof(Image))]
        [TemplatePart(Name = "PART_CommentsImage", Type = typeof(Image))]
        [TemplatePart(Name = "PART_BookmarkedImage", Type = typeof(Image))]
        public class TitleBoxNew : Control
        {

    This control overrides OnApplyTemplate:

        public override void OnApplyTemplate()
        {
            base.OnApplyTemplate();
            InitializeEvents();
        }

    This works well most of the time. But I have added the control inside a custom tab control in a window, and somehow OnApplyTemplate is never called for that control! Why is WPF so random!?

  • Webbrowser control: Get element value and store it in a variable.

    - by Khou
    Winform: web browser control. The web browser displays the following content, within an html table:

        [Element]    [Value]
        Name         John Smith
        Email        [email protected]

    For the example above, the html code might look something like this:

        <table>
          <tbody>
            <tr>
              <td><label class="label">Name</label></td>
              <td class="normaltext">John Smith</td>
            </tr>
            <tr>
              <td><label class="label">Email</label></td>
              <td><span class="normaltext">[email protected]</span></td>
            </tr>
          </tbody>
        </table>

    I want to get the element value, i.e. the value to the right of the label. What is the best way to do this?
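
    One possible approach (a sketch, not a definitive answer; the control name webBrowser1, the helper name, and the row layout are assumptions): walk the document's label elements and, on a text match, read the InnerText of the second cell in the same row:

        // find the value cell to the right of a given label, e.g. "Email"
        private string GetValueForLabel(string labelText)
        {
            foreach (HtmlElement label in webBrowser1.Document.GetElementsByTagName("label"))
            {
                if (label.InnerText == labelText)
                {
                    // label's <td> -> parent <tr>; the second <td> holds the value
                    HtmlElement row = label.Parent.Parent;
                    return row.GetElementsByTagName("td")[1].InnerText;
                }
            }
            return null;
        }

    Called as string email = GetValueForLabel("Email"); once the DocumentCompleted event has fired.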
