Search Results

Search found 10517 results on 421 pages for 'foo bar'.


  • Make website reachable through domain?

    - by Msmit1993
    I'm learning to use IIS 7 but I don't understand how to make my website available through a domain. I have a domain; as an example I will call it www.test.com. I have made a website in IIS, running on port 80, which can be viewed by typing the server's IP into the address bar of my browser. So if I type www.test.com into the address bar, how do I make my IIS website show up? Without a redirect, of course; I don't want users to see the IP in the address bar.
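    In case it helps while this awaits an answer, the usual recipe has two steps: create a DNS A record at your domain registrar pointing www.test.com at the server's public IP, then add a host-header binding so IIS answers requests carrying that host name. A hedged sketch using appcmd, assuming your site is named "Default Web Site":

        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"Default Web Site" /+bindings.[protocol='http',bindingInformation='*:80:www.test.com']

    With the binding in place the browser keeps showing www.test.com; no redirect is involved.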


  • mod_ReWrite to remove part of a URL

    - by Jack
    Someone has incorrectly linked to some of my URLs, causing 404 errors in Google Webmaster Tools. Here is an example. Linked URL: http://www.example.com/foo-%E2%80%8Bbar.html Correct URL: http://www.example.com/foo-bar.html I would like to 301 redirect any instance of this kind of incorrect linking to the correct URL. I have tried the following but it generates 404 errors site-wide. Options +FollowSymLinks RewriteEngine on RewriteRule ^foo-(.*)bar\.html$ http://www.example.com/foo-bar\.html? [L,R=301] Could anyone let me know what I am doing wrong?
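    A hedged guess at a fix: %E2%80%8B is the URL-encoded zero-width space (U+200B), and the pattern ^foo-(.*)bar\.html$ also matches the correct foo-bar.html itself (the (.*) can match nothing), so the rule fires on its own destination. Since Apache's rewrite regexes are PCRE, the stray character can be matched explicitly with hex escapes:

        Options +FollowSymLinks
        RewriteEngine on
        # \xE2\x80\x8B are the UTF-8 bytes of the zero-width space in the bad links
        RewriteRule ^foo-\xE2\x80\x8Bbar\.html$ http://www.example.com/foo-bar.html [L,R=301]

    This is a sketch, not a verified rule; test it with a request for the encoded URL before relying on it.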


  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like: Allow from 192.168.0.3 <-- From <Location /foo/bar> in myserver.com vhost Require valid user <-- From <Directory /var/www/foo> in global configuration Satisfy any <-- From <File bar.html> in global configuration [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which allows you to check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?
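    Not the per-request tracer the question asks for, but the stock mod_info module will at least dump the merged configuration Apache actually loaded, directive by directive and context by context, which helps verify merging by inspection. A minimal sketch using Apache 2.2-era access control:

        LoadModule info_module modules/mod_info.so
        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from 192.168.0.3
        </Location>

    Browsing to http://myserver.com/server-info?config then shows the parsed configuration.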


  • Find directories that do not contain a directory?

    - by erikcw
    I'm trying to figure out how to use the Linux "find" command (or another command that will get the job done) to return a list of file paths/directories that do not contain a directory of a certain name. ~/web/domain1.com/public_html/bar ~/web/domain2.com/public_html/ ~/web/domain3.com/public_html/bar ~/web/domain4.com/public_html/ I want all of the paths that don't contain the directory named "bar" (domain2.com and domain4.com). Any idea how I can get find to output such a list? Thanks!
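    One approach that should work with GNU find (substituting '{}' inside a larger argument, as in '{}/bar', is a GNU extension):

        find ~/web -mindepth 2 -maxdepth 2 -type d -name public_html \
            ! -exec test -d '{}/bar' \; -print

    This prints each public_html directory for which test -d reports no bar subdirectory; given the layout above it should print the domain2.com and domain4.com paths.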


  • How to remove a non-empty directory which is not owned by the user in Linux?

    - by Alex B
    If a directory "foo" is owned by user A and contains a directory "bar", which is owned by root, user A can simply remove it with rmdir, which is logical, because "foo" is writable by user A. But if the directory "bar" contains another root-owned file, the directory can't be removed, because the files in it must be removed first so that it becomes empty. But "bar" itself is not writable, so it's not possible to remove the files in it. Is there a way around it? Or convince me why this restriction is necessary.
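    For the record, this asymmetry is the Unix permission model working as designed: unlinking an entry requires write permission on the directory that contains the entry. User A can rmdir the empty "bar" because its entry lives in the writable "foo", but deleting files inside "bar" needs write permission on "bar" itself. The practical way around it is privilege escalation:

        # fails as user A: "bar" is not writable by A
        rm -rf foo/bar
        # succeeds, assuming user A has sudo rights
        sudo rm -rf foo/bar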


  • ActionScript 3: Monitoring the activity level for multiple microphones doesn't seem to work.

    - by Dave
    For a project I want to show all available webcams and microphones, so that the user can easily select whichever webcam/microphone combination they prefer. I run into an issue with the microphone listing, though. Each microphone is listed with an activity animation and its name. I am able to list all microphones just fine (using the Microphone.names array), but it seems like I can only get the activity viewer to work for one microphone. The other microphones show up with '-1' activity, which (as far as I know) is Flex for 'present, but not in use'. When unplugging the microphone that does show activity, the next one (in my case, the mic-in line on my motherboard) shows up with '0' activity (it's not connected, so that makes sense). During my testing I have a total of 3 microphones available: the not-connected onboard mic-in port, and two connected microphones. For testing purposes I use a timer that traces the current microphone activity every 100ms, and the graph is also shown. It does not seem to matter which default microphone I set via Flash's settings panel. The code: I've only attached the relevant code snippets below to make it easier for you to read through them. Please let me know if you prefer the entire code. Main application.mxml (note: cont is a VBox; i is defined before this code snippet):
        var mics:Array = Microphone.names;
        for(i=0; i < mics.length; i++){
            var mic:settingsMicEntry = new assets.settingsMicEntry;
            mic.d = {name: mics[i], index: i};
            cont.addChild(mic);
        }
    assets/settingsMicEntry.mxml (timer is defined before this code snippet; the SoundTransform is added to silence local microphone playback, though excluding this code does not solve the problem, sadly (I've tried); display is an MXML Canvas object):
        mic = Microphone.getMicrophone(d.index);
        if(mic){
            // Temporary: The Microphones' visualizer
            var bar:Box = new Box();
            bar.y = 50;
            bar.height = 0;
            bar.width = 66;
            bar.setStyle("backgroundColor", 0x003300);
            display.addChild(bar);
            var tf:SoundTransform = new SoundTransform(0);
            mic.setLoopBack(true);
            mic.soundTransform = tf;
            timer = new Timer(100);
            timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void{
                var h:int = Math.floor((display.height/100)*mic.activityLevel);
                bar.height = (h>-1) ? h : 0;
                bar.y = (h>-1) ? display.height-h : display.height;
                trace('TIMER: '+h+' from '+d.name);
            });
            timer.start();
        }
    I'm pulling my hair out here, so any help is much appreciated! Thanks, -Dave Ps.: Pardon the messiness of the code!


  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server code-named Denali:
    What's new in SSIS in Denali
    Introduction to SSIS Projects in Denali
    Parameters in SSIS in Denali
    SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali
    The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view, analyse and drill into that metadata, and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack, which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports:
    Catalog – A high-level summary of all activity in the Catalog
    Folders – A summary of activity in each Catalog Folder
    Folder – Project-level activity per single Folder
    Executions – A visualisation of all executions per Folder/Project/Package/Environment or subset thereof
    Execution – Information about an individual execution
    Here is a screenshot of the Executions report: Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart; the success or otherwise of each execution is indicated by the colour of the bar, and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting Pack, which I have embedded below. If you are having any trouble viewing the video, go see it at http://vimeo.com/17617974. I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this, specifically: let me know if anything does not work as you expect, and give me your feature requests. The easiest way to get hold of me for now is within the comments section of this blog post. That’s all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet


  • Open Multiple Sites Without Reopening the Menus in Firefox

    - by Asian Angel
    Are you frustrated with having to reopen your menus for each website that you need or want to view? Now you can keep those menus open while opening multiple websites with the Stay-Open Menu extension for Firefox.
    Stay-Open Menu in Action: You can start using the extension as soon as you have installed it…simply access your favorite links in the “Bookmarks Menu, Bookmarks Toolbar, Awesome Bar, or History Menu” and middle click on the appropriate entries. Here you can see our browser opening the Productive Geek website while the “Bookmarks Menu” is still open. As soon as you left click on a link or click outside the menus they will close normally like before. Note: Middle clicked links open in new tabs. The only time during our tests that a newly opened link “remained in the background” was for links opened from the “Awesome Bar”. But as soon as the “Awesome Bar” was closed the new tabs automatically focused to the front. A link being opened from the “History Menu”…still open while the webpage is loading.
    Options: The options are simple to sort through…enable or disable the additional “stay open” functions and enable automatic menu closing if desired.
    Conclusion: If you get frustrated with having to reopen menus to access multiple webpages at one time then you might want to give this extension a try.
    Links: Download the Stay-Open Menu extension (Mozilla Add-ons)


  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them. And in this first post I would like to have a quick walkthrough of the new preview developer portal.
    History of the Developer Portal: If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression I had when using that old one: the layout was not that attractive and you had very limited features. In November 2010, along with the SDK 1.3 release, the developer portal got a big jump. In order to provide more usability and features it was rebuilt on Silverlight; hence it runs like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features have been added to this portal, such as the remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser compatibility. As you know, in most mobile and tablet devices the browser doesn't allow rich content plugins, such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though what they need may just be to restart a hosted service, or view the status of their databases. Another problem is performance. Silverlight provides a rich experience to the users, but also needs more bandwidth. So in this upgrade the preview developer portal is back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.
    Preview Portal vs. Silverlight Portal: Before I start to talk about the new preview portal I'd better highlight that this preview portal is a PREVIEW version, which means that even though you can do almost everything that was already in the old one, along with some cool new features I will mention in the coming posts, some things are still being developed and migrated. So sometimes you need to switch back to the old one. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function will take you back to the old SQL Azure Management Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, to go back you need to click the preview button at the top of the page and click the “Take me to the previous portal” link.
    Overview: There are four parts in the preview portal. On the top is the header, which shows the account you are currently logged in with. If you click on the header it will show the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the price information page, community and account, etc. The navigation bar is on the left hand side, with the categories listed below.
    ALL ITEMS: All items in your Windows Azure account, including the web sites, services, databases, etc.
    WEB SITES: The web sites in your Windows Azure account. It will only show the web sites you have. The linked resources will be shown if you drill down into a web site.
    VIRTUAL MACHINES: The virtual machines that you have deployed to Azure.
    CLOUD SERVICES: All Windows Azure hosted services in your account.
    SQL DATABASES: All SQL databases (SQL Azure) in your account.
    STORAGE: All Windows Azure storage services in your account.
    NETWORKS: The virtual networks (Windows Azure Connect) you have created.
    The available items will be listed in the main part of the page based on which category is currently selected. If there's no item, it will show a link for you to quick-create one. At the bottom of the page there is the command and information bar. Based on what is selected and what is performed by the user, it will show the related information and commands. For example, in the image below, when I was creating a new web site the information bar told me that my web site was being provisioned, and there are two commands in the command bar. And once it's ready the command bar will show some commands that I can apply to my new web site. “Web Sites” is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a website from scratch or from an existing library. I will introduce it in more detail in the next post. Also in the command bar you can create a service by clicking the NEW button. It will slide the creation panel up to you.
    Where's My Hosted Services: The Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page, and select CLOUD SERVICE and QUICK CREATE. This will create a blank hosted service without deployment and certificate. It just needs you to specify the service URL and the affinity/region. Then the service will be shown in the list. If you click the item, all its information will be shown in the main part. Since there's no package deployed to this service, currently we cannot see any information about it. But we can upload a package by using the command at the bottom. And as you can see, we can manage the configuration, instances and certificates, and we can scale up and down (change the VM size), and in and out (increase and decrease the instance count) on our service. Assuming I had created an ASP.NET MVC 3 web role project in Visual Studio and completed the package, I can then click the UPLOAD button on this page to deploy my package. In the pop-up window I just specify my deployment name, package file and configuration file. Also I can check the box below so that it will NOT warn me if there is only one instance in this deployment. Once we click the OK button our package will be uploaded and provisioned by the platform. After a while the information bar shows that the service is ready. We can see the basic information about this service and deployment if we go to the dashboard page, for example the usage overview diagram, status, URL, public IP address, etc. In the configure page we can view and change the CSCFG content, such as the monitoring settings, connection strings and OS family. In the scale page we can increase and decrease the count of the instances. And in the instances page we can view the status of all instances. And if your service is using some SQL databases and storage accounts, they will be shown as the linked resources under the linked resources page. You can manage the certificates of this service as well under the certificates page.
    How About My Storage Services: The storage services can be managed by clicking the STORAGE link in the navigation bar. And we can create a new storage service from the NEW button. After specifying the storage name and region it will be provisioned by the platform. If you want to copy or manage the storage keys you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration. Click the storage item in the list and navigate to the configure page. As you can see on the page, you can enable monitoring for blob, table and queue. And you can also enable logging for any requests that come to the storage. But as the tooltip on the page says, enabling monitoring and logging will increase the usage of the storage, which means a bigger bill. So make sure you enable them appropriately.
    And My SQL Databases (SQL Azure): The last thing I want to quickly introduce is the SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Database navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have one, or create a new one. If you select to create a new server, there is another step where you need to specify the server login, password and the region. Once it's ready you can manage your databases as well as the servers in the portal. For a particular server you can update the firewall settings on its Configure page.
    So, What Else: There are some other areas of the preview portal I didn't cover, such as the virtual machines, virtual networks and web sites. Regarding the virtual machines and web sites, I will talk about them in future separate posts. Regarding the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development. Some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open the remote desktop on your hosted services, and you cannot navigate directly to the Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to navigate back to the old portal. So in the coming several months we might need to use both of these two sites.
    Summary: In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service, and how to provision a storage service and a SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them are in a different category or page, which we need to figure out.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Have you changed your coding style recently? It wasn't hard, was it?

    - by Ernelli
    I used to write code in C-like languages using the Allman style, regarding the position of braces.
        void foo(int bar)
        {
            if(bar)
            {
                //...
            }
            else
                return;
            //...
        }
    Now for the last two years I have been working mostly in JavaScript, and when we adopted jslint as part of our QA process, I had to adapt to the Crockford way of doing things. So I had to change the coding style into:
        function foo(bar) {
            if (bar) {
                //...
            } else {
                return;
            }
            //...
        }
    Now, apart from comparing a C/C++ example with JavaScript, I must say that my JavaScript-Crockford coding style has now spread into my C/C++/Java coding when I revise old projects and work on code in those languages, which, for example, have no problem with single-line statements or ambiguous newline insertion. I used to consider the latter format very awkward. I have never had any problems with adapting my coding style to the one chosen by my predecessors, except for when I was a junior developer, mostly being the sole developer on legacy projects, where the first thing I did was to change the indenting style. But now after a couple of months I consider the Allman style a little bit too spacious and feel more comfortable with the K&R-like style. Have you changed your coding style during your career?


  • Universal navigation menu across domains

    - by Jon Harley
    I'd like to start by saying that I've searched for hours and could not find a definitive answer to my question. Across different sites on different second-level domains exists a universal navigation bar with a collection of roughly 30 links. This universal bar is exactly the same for every page on each domain. The bar's HTML, CSS and JavaScript are all stored in a subfolder for each domain, and the HTML is embedded upon serving the page; it is not being injected on the client side. None of the links use any rel directives and are as vanilla as can be. My question is about Google's duplicate content rule. Would something like this be considered duplicate content? Matt Cutts's blog post about duplicate content mentions boilerplate repetition, but then he mentions lengthy legalese. Since the text in this universal bar is brief and uses common terms, I wonder if the same rule applies. If this is considered duplicate content, what would be a good way to correct the problem? Thank you for your help.


  • Which is a better practice - helper methods as instance or static?

    - by Ilian Pinzon
    This question is subjective but I was just curious how most programmers approach this. The sample below is in pseudo-C# but this should apply to Java, C++, and other OOP languages as well. Anyway, when writing helper methods in my classes, I tend to declare them as static and just pass the fields if the helper method needs them. For example, given the code below, I prefer to use Method Call #2.
        class Foo
        {
            Bar _bar;
            public void DoSomethingWithBar()
            {
                // Method Call #1.
                DoSomethingWithBarImpl();
                // Method Call #2.
                DoSomethingWithBarImpl(_bar);
            }
            private void DoSomethingWithBarImpl()
            {
                _bar.DoSomething();
            }
            private static void DoSomethingWithBarImpl(Bar bar)
            {
                bar.DoSomething();
            }
        }
    My reason for doing this is that it makes it clear (to my eyes at least) whether the helper method has a possible side-effect on other objects - even without reading its implementation. I find that I can quickly grok methods that use this practice, and thus it helps me in debugging things. Which do you prefer to do in your own code and what are your reasons for doing so?


  • xmodmap modifications aren't enough - anything else I can do?

    - by Codemonkey
    I'm using an Apple keyboard which has some annoyances compared to other keyboards. Namely, the Alt_L and Super_L keys are swapped, and the bar and less keys are swapped ("|" and "<"). I've written an Xmodmap file to swap the keys back: keycode 49 = less greater less greater onehalf threequarters keycode 64 = Super_L NoSymbol Super_L keycode 94 = bar section bar section brokenbar paragraph keycode 108 = Super_R NoSymbol Super_R keycode 133 = Alt_L Meta_L Alt_L Meta_L keycode 134 = Alt_R Meta_R Alt_R Meta_R I did this by identifying the keys using xev and the default modmap xmodmap -pke and swapping the keycodes. xev now identifies all my keys as correct, which is awesome! I can also use the correct keys to type the bar and less than symbols. (I followed this answer on askubuntu: How do I remap certain keys?) But it seems the change isn't very deep. For instance, the Super key is now broken in the Compiz Settings Manager. No shortcuts involving the Super key works (but the Alt key does). Also the settings dialog for Gnome Do doesn't heed the changes in xmodmap, and I can't open the Gnome Do window anymore if I use any of the remapped keys. So to summarize, everything broke. I would like a deeper way of telling Ubuntu (or any other Linux distro for that matter) which keys are which on the keyboard. Is there a way to edit the Keyboard Layout directly? I'm using the Norwegian Bokmål keyboard layout. Does it reside in a file somewhere I could edit? Any comments, previous experiences or relevant stray thoughts would be greatly appreciated - Thanks
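    Two deeper-than-xmodmap avenues that may help; both are hedged guesses about your setup, not guaranteed fixes. XKB itself ships a ready-made option for the Alt/Super swap, and the hid_apple kernel driver has an iso_layout parameter that is the usual culprit behind the swapped <> and | keys on Apple ISO keyboards:

        # apply the swap at the XKB level, which Compiz and friends will see
        setxkbmap -layout no -option altwin:swap_alt_win

        # stop the hid_apple driver from remapping the ISO key pair
        echo 0 | sudo tee /sys/module/hid_apple/parameters/iso_layout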


  • Creating Ubuntu Browser App Frames

    - by user73006
    After watching the video I am inspired to create a browser, but I am stuck at one place; could you please help me with this?
    Requirement: As you displayed in your video, I want to create multiple buttons in my toolbar which will open a second toolbar or popup window. From that popup window I want to select a specific button which will open my required browser.
    Question: As displayed in your video I created a new button, and if I try to open a new link using it, that works; but now I want to display a toolbar or popup window once anyone clicks on that button. How can I do that? The second toolbar needs to be activated only after clicking on that button.
    Things I tried: As per my understanding I created a second toolbar, and on that toolbar I created a button; now I want to know how to link that toolbar with my browser toolbar button. I tried that by passing the Signal property on the second toolbar in Quickly, but something is missing.
    My code:
        class TvbrowserWindow(Window):
            gtype_name = "TvbrowserWindow"

            def finish_initializing(self, builder):  # pylint: disable=E1002
                """Set up the main window"""
                super(TvbrowserWindow, self).finish_initializing(builder)
                self.AboutDialog = AboutTvbrowserDialog
                self.PreferencesDialog = PreferencesTvbrowserDialog
                # Code for other initialization actions should be added here.
                self.refreshbutton = self.builder.get_object("refreshbutton")
                self.SONY = self.builder.get_object("SONY")
                self.urlentry = self.builder.get_object("urlentry")
                self.scrolledwindow1 = self.builder.get_object("scrolledwindow1")
                self.webview = WebKit.WebView()
                self.scrolledwindow1.add(self.webview)
                self.webview.show()

            def on_refreshbutton_clicked(self, widget):
                print "refresh"

            def on_urlentry_activate(self, widget):
                url = widget.get_text()
                print url
                self.webview.open(url)
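    If I understand the requirement, the missing piece is a clicked handler on the toolbar button that toggles the second toolbar's visibility. A minimal sketch, assuming the second toolbar's id in the Glade file is "toolbar2" (my name, adjust to yours) and the handler is auto-connected the same way as on_refreshbutton_clicked above:

        def on_SONY_clicked(self, widget):
            # Look up the second toolbar built by the Builder and toggle it;
            # it stays hidden until the button is clicked.
            toolbar2 = self.builder.get_object("toolbar2")
            toolbar2.set_visible(not toolbar2.get_visible())

    In Glade you would also uncheck the second toolbar's "visible" property so it starts hidden.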


  • Ubuntu 12.10 Unity & Gnome Shell problems

    - by user109292
    I'm experiencing some problems since I decided to upgrade to Ubuntu 12.10 two days ago. Firstly, I cannot select the Unity environment I previously used on 12.04 without opening the terminal with Ctrl+Alt+T and typing setsid unity. When I select the Unity environment on the account page when I start the computer, it automatically switches back to GNOME and launches my session. I tried to get Unity back using the setsid unity tip, and it worked fine. But after a few minutes, everything freezes and I cannot control anything anymore. The only option left is to press the power button of my Asus EeePC and switch everything off. Question 1: What can I do to get my Unity environment back on 12.10 from the start, without using the terminal every time? And what should I do to prevent the whole system from freezing once that's done? Secondly, since I cannot use Unity for now, I'm using another interface, GNOME Shell. What's bothering me is that the Activities bar (let's call it that, because I don't know the proper name) and the Internet bar (or any bar from any other window) cannot merge into one another, reducing the display of the screen I'm actually using to peanuts! Question 2: Is there a way to merge those two bars? Or is there a way to hide the Activities bar when I'm not using it, like in the Unity environment?


  • What is the point of the PImpl pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code which uses the PIMPL idiom in C++. I assume its purpose is to hide private data/types/implementation, so it can remove dependencies and thereby reduce compile times and header include issues. But interface classes in C++ also have this capability; they can likewise be used to hide data/types and implementation. And to let the caller see only the interface when creating an object, we can declare a factory method in the interface header. The comparison is:
    Cost: The interface way costs less, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); }; you just need to define the signature in the interface.
    Well understood: The interface technique is better understood by every C++ developer.
    Performance: The interface way's performance is no worse than the PIMPL idiom's; both involve an extra indirection. I assume the performance is the same.
    Following is the pseudocode to illustrate my question:
        // Forward declaration helps you avoid including the BarImpl header,
        // and the headers included by the BarImpl header.
        class BarImpl;
        class Bar
        {
        public:
            // public functions
            void doWork();
        private:
            // You don't need to recompile Bar.cpp after changing the implementation in BarImpl.cpp
            BarImpl* m_impl;
        };
    The same purpose can be implemented using an interface:
        // Bar.h
        class IBar
        {
        public:
            virtual ~IBar(){}
            // public functions
            virtual void doWork() = 0;
        };
        // to expose only the interface instead of the class name to the caller
        IBar* createObject();
    So what's the point of PIMPL?
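    For what it's worth, a minimal sketch of the hidden half of the interface approach described above; the file name and the BarImpl class name are my assumptions, not part of the question:

        // Bar.cpp - the only translation unit that knows the concrete type
        #include "Bar.h"
        class BarImpl : public IBar
        {
        public:
            virtual void doWork() { /* real implementation goes here */ }
        };
        IBar* createObject() { return new BarImpl(); }

    Callers include only Bar.h and construct through createObject(), so changes to BarImpl never trigger recompilation outside Bar.cpp.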


  • Python hashable dicts

    - by TokenMacGuy
    As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (EDIT: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.) The problem is that Python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyway) doesn't help.
        >>> cache = {}
        >>> rule = {"foo":"bar"}
        >>> cache[(rule, "baz")] = "quux"
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unhashable type: 'dict'
        >>>
    I guess it has to be tuples all the way down. Now the Python standard library provides approximately what I'd need: collections.namedtuple has a very different syntax, but can be used as a key. Continuing from the above session:
        >>> from collections import namedtuple
        >>> Rule = namedtuple("Rule", rule.keys())
        >>> cache[(Rule(**rule), "baz")] = "quux"
        >>> cache
        {(Rule(foo='bar'), 'baz'): 'quux'}
    Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. But combining the rules together is much more dynamic. In particular, I'd like a simple way to have rules override other rules, but collections.namedtuple has no analogue to dict.update(). Edit: An additional problem with namedtuples is that they are strictly positional. Two tuples that look like they should be different can in fact be the same:
        >>> you = namedtuple("foo", ["bar", "baz"])
        >>> me = namedtuple("foo", ["bar", "quux"])
        >>> you(bar=1, baz=2) == me(bar=1, quux=2)
        True
        >>> bob = namedtuple("foo", ["baz", "bar"])
        >>> you(bar=1, baz=2) == bob(bar=1, baz=2)
        False
    tl;dr: How do I get dicts that can be used as keys to other dicts? Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling dict.__setitem__(instance, key, value), but we're all adults here.
        class hashdict(dict):
            """
            hashable dict implementation, suitable for use as a key into other dicts.

            >>> h1 = hashdict({"apples": 1, "bananas":2})
            >>> h2 = hashdict({"bananas": 3, "mangoes": 5})
            >>> h1+h2
            hashdict(apples=1, bananas=3, mangoes=5)
            >>> d1 = {}
            >>> d1[h1] = "salad"
            >>> d1[h1]
            'salad'
            >>> d1[h2]
            Traceback (most recent call last):
            ...
            KeyError: hashdict(bananas=3, mangoes=5)

            based on answers from
            http://stackoverflow.com/questions/1151658/python-hashable-dicts
            """
            def __key(self):
                return tuple(sorted(self.items()))
            def __repr__(self):
                return "{0}({1})".format(self.__class__.__name__,
                    ", ".join("{0}={1}".format(str(i[0]), repr(i[1])) for i in self.__key()))
            def __hash__(self):
                return hash(self.__key())
            def __setitem__(self, key, value):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def __delitem__(self, key):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def clear(self):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def pop(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def popitem(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def setdefault(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def update(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))
            def __add__(self, right):
                result = hashdict(self)
                dict.update(result, right)
                return result

        if __name__ == "__main__":
            import doctest
            doctest.testmod()


  • Using WeakReference to resolve the issue of unregistered .NET event handlers causing memory leaks.

    - by Eric
    The problem: Registered event handlers create a reference from the event to the event handler's instance. If that instance fails to unregister the event handler (via Dispose, presumably), then the instance memory will not be freed by the garbage collector. Example: class Foo { public event Action AnEvent; public void DoEvent() { if (AnEvent != null) AnEvent(); } } class Bar { public Bar(Foo l) { l.AnEvent += l_AnEvent; } void l_AnEvent() { } } If I instantiate a Foo, and pass this to a new Bar constructor, then let go of the Bar object, it will not be freed by the garbage collector because of the AnEvent registration. I consider this a memory leak, and seems just like my old C++ days. I can, of course, make Bar IDisposable, unregister the event in the Dispose() method, and make sure to call Dispose() on instances of it, but why should I have to do this? I first question why events are implemented with strong references? Why not use weak references? An event is used to abstractly notify an object of changes in another object. It seems to me that if the event handler's instance is no longer in use (i.e., there are no non-event references to the object), then any events that it is registered with should automatically be unregistered. What am I missing? I have looked at WeakEventManager. Wow, what a pain. Not only is it very difficult to use, but its documentation is inadequate (see http://msdn.microsoft.com/en-us/library/system.windows.weakeventmanager.aspx -- noticing the "Notes to Inheritors" section that has 6 vaguely described bullets). I have seen other discussions in various places, but nothing I felt I could use. I propose a simpler solution based on WeakReference, as described here. My question is: Does this not meet the requirements with significantly less complexity? To use the solution, the above code is modified as follows: class Foo { public WeakReferenceEvent AnEvent = new WeakReferenceEvent(); internal void DoEvent() { AnEvent.Invoke(); } } class Bar { public Bar(Foo l) { l.AnEvent += l_AnEvent; } void l_AnEvent() { } } Notice two things: 1. The Foo class is modified in two ways: The event is replaced with an instance of WeakReferenceEvent, shown below; and the invocation of the event is changed. 2. The Bar class is UNCHANGED. No need to subclass WeakEventManager, implement IWeakEventListener, etc. OK, so on to the implementation of WeakReferenceEvent. This is shown here. Note that it uses the generic WeakReference that I borrowed from here: http://damieng.com/blog/2006/08/01/implementingweakreferencet I had to add Equals() and GetHashCode() to his class, which I include below for reference. 
    class WeakReferenceEvent { public static WeakReferenceEvent operator +(WeakReferenceEvent wre, Action handler) { wre._delegates.Add(new WeakReference<Action>(handler)); return wre; } public static WeakReferenceEvent operator -(WeakReferenceEvent wre, Action handler) { foreach (var del in wre._delegates) if (del.Target == handler) { wre._delegates.Remove(del); return wre; } return wre; } HashSet<WeakReference<Action>> _delegates = new HashSet<WeakReference<Action>>(); internal void Invoke() { HashSet<WeakReference<Action>> toRemove = null; foreach (var del in _delegates) { if (del.IsAlive) del.Target(); else { if (toRemove == null) toRemove = new HashSet<WeakReference<Action>>(); toRemove.Add(del); } } if (toRemove != null) foreach (var del in toRemove) _delegates.Remove(del); } } public class WeakReference<T> : IDisposable { private GCHandle handle; private bool trackResurrection; public WeakReference(T target) : this(target, false) { } public WeakReference(T target, bool trackResurrection) { this.trackResurrection = trackResurrection; this.Target = target; } ~WeakReference() { Dispose(); } public void Dispose() { handle.Free(); GC.SuppressFinalize(this); } public virtual bool IsAlive { get { return (handle.Target != null); } } public virtual bool TrackResurrection { get { return this.trackResurrection; } } public virtual T Target { get { object o = handle.Target; if ((o == null) || (!(o is T))) return default(T); else return (T)o; } set { handle = GCHandle.Alloc(value, this.trackResurrection ? GCHandleType.WeakTrackResurrection : GCHandleType.Weak); } } public override bool Equals(object obj) { var other = obj as WeakReference<T>; return other != null && Target.Equals(other.Target); } public override int GetHashCode() { return Target.GetHashCode(); } } Its functionality is trivial. I override operator + and - to get the += and -= syntactic sugar matching events. These create WeakReferences to the Action delegate. This allows the garbage collector to free the event target object (Bar in this example) when nobody else is holding on to it. In the Invoke() method, simply run through the weak references and call their Target Action. If any dead (i.e., garbage collected) references are found, remove them from the list. Of course, this only works with delegates of type Action. I tried making this generic, but ran into the missing where T : delegate in C#! As an alternative, simply modify class WeakReferenceEvent to be a WeakReferenceEvent<T>, and replace the Action with Action<T>. Fix the compiler errors and you have a class that can be used like so: class Foo { public WeakReferenceEvent<int> AnEvent = new WeakReferenceEvent<int>(); internal void DoEvent() { AnEvent.Invoke(5); } } Hopefully this will help someone else when they run into the mystery .NET event memory leak!


  • Adapting non-iterable containers to be iterated via custom templatized iterator

    - by DAldridge
    I have some classes, which for various reasons out of scope of this discussion, I cannot modify (irrelevant implementation details omitted): class Foo { /* ... irrelevant public interface ... */ }; class Bar { public: Foo& get_foo(size_t index) { /* whatever */ } size_t size_foo() { /* whatever */ } }; (There are many similar 'Foo' and 'Bar' classes I'm dealing with, and it's all generated code from elsewhere and stuff I don't want to subclass, etc.) [Edit: clarification - although there are many similar 'Foo' and 'Bar' classes, it is guaranteed that each "outer" class will have the getter and size methods. Only the getter method name and return type will differ for each "outer", based on whatever it's "inner" contained type is. So, if I have Baz which contains Quux instances, there will be Quux& Baz::get_quux(size_t index), and size_t Baz::size_quux().] Given the design of the Bar class, you cannot easily use it in STL algorithms (e.g. for_each, find_if, etc.), and must do imperative loops rather than taking a functional approach (reasons why I prefer the latter is also out of scope for this discussion): Bar b; size_t numFoo = b.size_foo(); for (int fooIdx = 0; fooIdx < numFoo; ++fooIdx) { Foo& f = b.get_foo(fooIdx); /* ... do stuff with 'f' ... */ } So... I've never created a custom iterator, and after reading various questions/answers on S.O. about iterator_traits and the like, I came up with this (currently half-baked) "solution": First, the custom iterator mechanism (NOTE: all uses of 'function' and 'bind' are from std::tr1 in MSVC9): // Iterator mechanism... template <typename TOuter, typename TInner> class ContainerIterator : public std::iterator<std::input_iterator_tag, TInner> { public: typedef function<TInner& (size_t)> func_type; ContainerIterator(const ContainerIterator& other) : mFunc(other.mFunc), mIndex(other.mIndex) {} ContainerIterator& operator++() { ++mIndex; return *this; } bool operator==(const ContainerIterator& other) { return ((mFunc.target<TOuter>() == other.mFunc.target<TOuter>()) && (mIndex == other.mIndex)); } bool operator!=(const ContainerIterator& other) { return !(*this == other); } TInner& operator*() { return mFunc(mIndex); } private: template<typename TOuter, typename TInner> friend class ContainerProxy; ContainerIterator(func_type func, size_t index = 0) : mFunc(func), mIndex(index) {} function<TInner& (size_t)> mFunc; size_t mIndex; }; Next, the mechanism by which I get valid iterators representing begin and end of the inner container: // Proxy(?) to the outer class instance, providing a way to get begin() and end() // iterators to the inner contained instances... template <typename TOuter, typename TInner> class ContainerProxy { public: typedef function<TInner& (size_t)> access_func_type; typedef function<size_t ()> size_func_type; typedef ContainerIterator<TOuter, TInner> iter_type; ContainerProxy(access_func_type accessFunc, size_func_type sizeFunc) : mAccessFunc(accessFunc), mSizeFunc(sizeFunc) {} iter_type begin() const { size_t numItems = mSizeFunc(); if (0 == numItems) return end(); else return ContainerIterator<TOuter, TInner>(mAccessFunc, 0); } iter_type end() const { size_t numItems = mSizeFunc(); return ContainerIterator<TOuter, TInner>(mAccessFunc, numItems); } private: access_func_type mAccessFunc; size_func_type mSizeFunc; }; I can use these classes in the following manner: // Sample function object for taking action on an LMX inner class instance yielded // by iteration... 
template <typename TInner> class SomeTInnerFunctor { public: void operator()(const TInner& inner) { /* ... whatever ... */ } }; // Example of iterating over an outer class instance's inner container... Bar b; /* assume populated which contained items ... */ ContainerProxy<Bar, Foo> bProxy( bind(&Bar::get_foo, b, _1), bind(&Bar::size_foo, b)); for_each(bProxy.begin(), bProxy.end(), SomeTInnerFunctor<Foo>()); Empirically, this solution functions correctly (minus any copy/paste or typos I may have introduced when editing the above for brevity). So, finally, the actual question: I don't like requiring the use of bind() and _1 placeholders, etcetera by the caller. All they really care about is: outer type, inner type, outer type's method to fetch inner instances, outer type's method to fetch count inner instances. Is there any way to "hide" the bind in the body of the template classes somehow? I've been unable to find a way to separately supply template parameters for the types and inner methods separately... Thanks! David
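    Not a definitive answer, but one sketch of hiding bind() from callers: a small factory function template that deduces the member-function pointer types, so callers pass &Bar::get_foo and &Bar::size_foo directly. make_container_proxy is my own name, not part of the question's code, and ref comes from std::tr1 just like bind (it keeps the proxy bound to the caller's instance instead of a copy):

        template <typename TOuter, typename TInner>
        ContainerProxy<TOuter, TInner> make_container_proxy(
            TOuter& outer,
            TInner& (TOuter::*getter)(size_t),
            size_t (TOuter::*sizer)())
        {
            return ContainerProxy<TOuter, TInner>(
                bind(getter, ref(outer), _1),
                bind(sizer, ref(outer)));
        }

        // Callers never see bind() or the placeholders:
        Bar b;
        ContainerProxy<Bar, Foo> bProxy = make_container_proxy(b, &Bar::get_foo, &Bar::size_foo);
        for_each(bProxy.begin(), bProxy.end(), SomeTInnerFunctor<Foo>());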


  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:
        public class DynamicFoo : DynamicObject
        {
            Dictionary<string, object> properties = new Dictionary<string, object>();
            public string Bar { get; set; }
            public DateTime Entered { get; set; }
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                if (!properties.ContainsKey(binder.Name))
                    return false;
                result = properties[binder.Name];
                return true;
            }
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                properties[binder.Name] = value;
                return true;
            }
        }
    This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.
    Strong Typing and Dynamic Casting: I now can instantiate and use DynamicFoo in a couple of different ways.
    Strong Typing:
        DynamicFoo fooExplicit = new DynamicFoo();
        var fooVar = new DynamicFoo();
    These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties:
        DynamicFoo fooExplicit = new DynamicFoo();
        // static typing assignments
        fooVar.Bar = "Barred!";
        fooExplicit.Entered = DateTime.Now;
        // echo back static values
        Console.WriteLine(fooVar.Bar);
        Console.WriteLine(fooExplicit.Entered);
    but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.
    Dynamic:
        dynamic fooDynamic = new DynamicFoo();
    fooDynamic, on the other hand, is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:
        DynamicFoo fooExplicit = new DynamicFoo();
        dynamic fooDynamic = fooExplicit;
    Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use "as dynamic". Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
    A dynamic type will look for members to access or call in two places: first using the strongly typed members of the object, then using the IDynamicMetaObjectProvider interface methods to access members. So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:
        dynamic fooDynamic = new DynamicFoo();
        // dynamic typing assignments
        fooDynamic.NewProperty = "Something new!";
        fooDynamic.LastAccess = DateTime.Now;
        // dynamically assigning static properties
        fooDynamic.Bar = "dynamic barred";
        fooDynamic.Entered = DateTime.Now;
        // echo back dynamic values
        Console.WriteLine(fooDynamic.NewProperty);
        Console.WriteLine(fooDynamic.LastAccess);
        Console.WriteLine(fooDynamic.Bar);
        Console.WriteLine(fooDynamic.Entered);
    The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime dynamically.
    The Alter Ego of IDynamicObject: The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strongly typed, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose IntelliSense, which means there's no IntelliSense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus it uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:
        dynamic fooDynamic = new DynamicFoo();
        fooDynamic.NewProperty = "New Property Value";
        DynamicFoo foo = fooDynamic;
        foo.Bar = "Barred";
    Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to the DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify "as dynamic" or "as DynamicFoo".
    Moral of the Story: This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface.
    You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET.


  • Stretching an ADF Faces Component to (near) 100%

    - by Christian David Straub
    In the past, many users would want their component to stretch to fill 100% of a horizontal area. However, to account for scrollbars that may or may not have been there, they would set the percentage to 98%, etc. A much better way to do this is to use the new "AFStretchWidth" style class, which will do this automatically for you. For instance, avoid this:
        <af:foo inlineStyle="width:98%" />
    and instead do this:
        <af:foo styleClass="AFStretchWidth" />
    You can learn more about ADF Faces layout management here.


  • Beginner’s Guide to Flock, the Social Media Browser

    - by Asian Angel
    Are you wanting a browser that can work as a social hub from the first moment that you start it up? If you love the idea of a browser that is ready to go out of the box then join us as we look at Flock. During the Install Process When you are installing Flock there are two install windows that you should watch for. The first one lets you choose between the “Express Setup & Custom Setup”. We recommend the “Custom Setup”. Once you have selected the “Custom Setup” you can choose which of the following options will enabled. Notice the “anonymous usage statistics” option at the bottom…you can choose to leave this enabled or disable it based on your comfort level. The First Look When you start Flock up for the first time it will open with three tabs. All three are of interest…especially if this is your first time using Flock. With the first tab you can jump right into “logging in/activating” favorite social services within Flock. This page is set to display each time that you open Flock unless you deselect the option in the lower left corner. The second tab provides a very nice overview of Flock and its’ built-in social management power. The third and final page can be considered a “Personal Page”. You can make some changes to the content displayed for quick and easy access and/or monitoring “Twitter Search, Favorite Feeds, Favorite Media, Friend Activity, & Favorite Sites”. Use the “Widget Menu” in the upper left corner to select the “Personal Page Components” that you would like to use. In the upper right corner there is a built-in “Search Bar” and buttons for “Posting to Your Blog & Uploading Media”. To help personalize the “My World Page” just a bit more you can even change the text to your name or whatever best suits your needs. The Flock Toolbar The “Flock Toolbar” is full of social account management goodness. In order from left to right the buttons are: My World (Homepage), Open People Sidebar, Open Media Bar, Open Feeds Sidebar, Webmail, Open Favorites Sidebar, Open Accounts and Services Sidebar, Open Web Clipboard Sidebar, Open Blog Editor, & Open Photo Uploader. The buttons will be “highlighted” with a blue background to help indicate which area you are in. The first area will display a listing of people that you are watching/following at the services shown here. Clicking on the “Media Bar Button” will display the following “Media Slider Bar” above your “Tab Bar”. Notice that there is a built-in “Search Bar” on the right side. Any photos, etc. clicked on will be opened in the currently focused tab below the “Media Bar”. Here is a listing of the “Media Streams” available for viewing. By default Flock will come with a small selection of pre-subscribed RSS Feeds. You can easily unsubscribe, rearrange, add custom folders, or non-categorized feeds as desired. RSS Feeds subscribed to here can be viewed combined together as a single feed (clickable links) in the “My World Page”. or can be viewed individually in a new tab. Very nice! Next on the “Flock Toolbar is the “Webmail Button”. You can set up access to your favorite “Yahoo!, Gmail, & AOL Mail” accounts from here. The “Favorites Sidebar” combines your “Browser History & Bookmarks” into one convenient location. The “Accounts and Services Sidebar” gives you quick and easy access to get logged into your favorite social accounts. Clicking on any of the links will open that particular service’s login page in a new tab. Want to store items such as photos, links, and text to add into a blog post or tweet later on? 
    Just drag and drop them into the “Web Clipboard Sidebar” for later access. Clicking on the “Blog Editor Button” will open a separate blogging window to compose your posts in. If you have not logged into or set up a blog account yet in Flock, you will see a message window letting you know. The “Blogging Window” itself is nice, simple, and straightforward. Likewise, if you are not already logged into your photo account(s), you will see a message window when you click on the “Photo Uploader Button”. Clicking “OK” will open the “Accounts and Services Sidebar” with compatible photo services highlighted in a light yellow color. Log in to your favorite service to start uploading all those great images.

    After Setting Up

    Here is what our browser looked like after setting up some of our favorite services. The Twitter feed is certainly looking nice and easy to read through… Some tweaking in the “RSS Feeds Sidebar” makes for a perfect reading experience. Keeping up with our e-mail is certainly easy to do too. A look back at the “Accounts and Services Sidebar” shows that all of our accounts are actively logged in (green dot on the right side). Going back to our “My World Page”, you can see how nice everything looks for monitoring our “Friend Activity & Favorite Feeds”. Moving on to regular browsing, everything is looking very good… Flock is a perfect choice for anyone wanting a browser and social hub all built into a single app.

    Conclusion

    Anyone who loves keeping up with their favorite social services while browsing will find using Flock to be a wonderful experience. You literally get the best of both worlds with this browser.

    Links

    Download Flock
    The Official Flock Extensions Homepage
    The Official Flock Toolbar Homepage

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user-level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker-related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details.

    The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

        % cc -c ~/hello.c
        % ar r foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 32-bit symbol table
        % ar r -S foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile.

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know two things:

    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). Answering that question requires a multi-step process:

    1. Extract all archive members.
    2. Run the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but it also represents a rather dramatic redesign and re-implementation of file.
    - Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker-related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.

    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user-level command.

    The elffile Command

    The syntax for this new command is:

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File         Description
        config       A runtime linker configuration file produced with crle
        dwarf.o      An ELF object
        /etc/passwd  A text file
        mixed.a      Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a  Archive containing ELF objects for different machines
        not_elf.a    Archive containing no ELF objects
        same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output of the file utility in two significant ways:

    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
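    As noted above, the classification logic needed for this already lives in libelf, and elffile is essentially a small command-line wrapper around it. As a rough illustration only — this is not the actual elffile source, and the program, its output strings, and the special-member handling are hypothetical — a minimal wrapper built on the standard libelf calls elf_begin(), elf_kind(), elf_getarhdr(), and elf_next() might look like this:

        /*
         * Hypothetical sketch of an elffile-style classifier over libelf.
         * Build on a system with libelf installed: cc elfclass.c -lelf
         */
        #include <fcntl.h>
        #include <libelf.h>
        #include <stdio.h>
        #include <unistd.h>

        static void
        classify(const char *path)
        {
            int fd = open(path, O_RDONLY);
            if (fd < 0) {
                perror(path);
                return;
            }

            Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
            switch (elf_kind(elf)) {
            case ELF_K_ELF:
                (void) printf("%s: ELF object\n", path);
                break;
            case ELF_K_AR: {
                /* Walk the archive members in place, without extracting them. */
                (void) printf("%s: ar archive\n", path);
                Elf *member;
                Elf_Cmd cmd = ELF_C_READ;
                while ((member = elf_begin(fd, cmd, elf)) != NULL) {
                    Elf_Arhdr *hdr = elf_getarhdr(member);
                    /* Skip special members such as the symbol table ("/"). */
                    if (hdr != NULL && hdr->ar_name[0] != '/')
                        (void) printf("  %s(%s): %s\n", path, hdr->ar_name,
                            elf_kind(member) == ELF_K_ELF ? "ELF" : "non-ELF");
                    cmd = elf_next(member);
                    (void) elf_end(member);
                }
                break;
            }
            default:
                (void) printf("%s: non-ELF\n", path);
                break;
            }
            (void) elf_end(elf);
            (void) close(fd);
        }

        int
        main(int argc, char **argv)
        {
            if (elf_version(EV_CURRENT) == EV_NONE) {
                (void) fprintf(stderr, "libelf is out of date\n");
                return (1);
            }
            for (int i = 1; i < argc; i++)
                classify(argv[i]);
            return (0);
        }

    Passing the archive's own Elf handle as the third argument to elf_begin() is what allows the members to be examined in place; that is the same property that lets elffile summarize an archive without the extract-identify-remove cycle described earlier.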

    Read the article

  • Is a subdomain per service a good idea for SEO?

    - by Kennie R.
    I am creating a site with quite a few services: a free account service, a subdomain for my site's blog, an article base, and other related services. Would having them all on subdomains be a good idea? Are there any caveats you are aware of in existing search engines for this? I believe mapping foo.example.com to example.com/foo, to provide an alternative just in case, is a good idea for sitemaps; I like to keep things clean.
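    For illustration, a minimal sketch of how such a mapping might be served, assuming Apache on a single host (the virtual hosts, names, and paths below are placeholders, not a recommendation):

        # Hypothetical vhost pair: foo.example.com and example.com/foo
        # serve the same content from the same directory.
        <VirtualHost *:80>
            ServerName foo.example.com
            DocumentRoot /var/www/foo
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/main
            # The path-based alternative to the subdomain
            Alias /foo /var/www/foo
        </VirtualHost>

    If both URLs stay reachable, declaring one canonical version (rel=canonical or a 301 redirect) keeps search engines from treating them as duplicate content.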

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >