Search Results

Search found 10688 results on 428 pages for 'dynamic pdf'.


  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to JSF's focus on components from the ground-up JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing near to nothing about AJAX, JavaScript or CSS. JSF has the fair and legitimate advantage of being an open vendor neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness is virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in or even create your own compatible implementation! As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC... Please note that any views expressed here are my own only and certainly does not reflect the position of Oracle as a company.
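
    To make the "wire up some properties and event handlers on a POJO" point concrete, here is a minimal sketch of what such a JSF backing bean might look like, assuming JSF 2.x with CDI; the bean, property and view names are illustrative assumptions, not code from the article or from any product mentioned above. The matching Facelets page would only need an h:inputText bound to #{greetingBean.name}, an h:commandButton bound to #{greetingBean.greet} and an h:outputText showing #{greetingBean.message}.

        import javax.enterprise.context.RequestScoped;
        import javax.inject.Named;

        // Hypothetical backing bean: JSF binds the view's input field, button and
        // output text to these properties and this action method by EL name alone,
        // with no servlet, request-parsing or rendering code required.
        @Named("greetingBean")
        @RequestScoped
        public class GreetingBean {

            private String name;     // populated from the input field on postback
            private String message;  // rendered back into the page after the action

            public void greet() {    // invoked by the command button
                message = "Hello, " + name + "!";
            }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getMessage() { return message; }
        }

    Everything else in the request cycle - decoding parameters, conversion and validation hooks, re-rendering the view - is handled by the framework and its component libraries, which is the boilerplate reduction the comparison is meant to show.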

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal.

    As much as I think that there is more to DBaaS than those approaches, and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers - each with their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and they could result in a management nightmare. That leads us to our next question.

    3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with a heterogeneous, common-denominator approach have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO? Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance.

    Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom of platform choice; hence Enterprise Manager's DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle's hardware as well as on 3rd-party hardware.

    As a parting note, I urge you to remember these four questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Nashorn in the Twitterverse, continued (translation)

    - by Homma
    This post is a translation of jlaskey's entry on the Nashorn Blog, which shows Nashorn calling Java libraries to pull data from Twitter and chart it with JavaFX:

    https://blogs.oracle.com/nashorn/entry/nashorn_in_the_twitterverse_continued

    The idea is to search Twitter for tweets that mention Nashorn using the Twitter4J library, count them per day, and then graph the counts with a JavaFX bar chart. Because Nashorn scripts can call Java classes directly, both Twitter4J and the JavaFX APIs are used from JavaScript without any glue code. The first script collects the data:

        var twitter4j = Packages.twitter4j;
        var TwitterFactory = twitter4j.TwitterFactory;
        var Query = twitter4j.Query;

        function getTrendingData() {
            var twitter = new TwitterFactory().instance;
            var query = new Query("nashorn OR nashornjs");
            query.since("2012-11-21");
            query.count = 100;
            var data = {};
            do {
                var result = twitter.search(query);
                var tweets = result.tweets;
                for each (var tweet in tweets) {
                    var date = tweet.createdAt;
                    var key = (1900 + date.year) + "/" + (1 + date.month) + "/" + date.date;
                    data[key] = (data[key] || 0) + 1;
                }
            } while (query = result.nextQuery());
            return data;
        }

    getTrendingData() searches for tweets containing "nashorn" or "nashornjs" posted since 2012-11-21 and tallies the number of tweets per day. The next script takes that data and renders it as a JavaFX BarChart:

        var javafx = Packages.javafx;
        var Stage = javafx.stage.Stage;
        var Scene = javafx.scene.Scene;
        var Group = javafx.scene.Group;
        var Chart = javafx.scene.chart.Chart;
        var FXCollections = javafx.collections.FXCollections;
        var ObservableList = javafx.collections.ObservableList;
        var CategoryAxis = javafx.scene.chart.CategoryAxis;
        var NumberAxis = javafx.scene.chart.NumberAxis;
        var BarChart = javafx.scene.chart.BarChart;
        var XYChart = javafx.scene.chart.XYChart;
        var Series = javafx.scene.chart.XYChart.Series;
        var Data = javafx.scene.chart.XYChart.Data;

        function graph(stage, data) {
            var root = new Group();
            stage.scene = new Scene(root);
            var dates = Object.keys(data);
            var xAxis = new CategoryAxis();
            xAxis.categories = FXCollections.observableArrayList(dates);
            var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);
            var series = FXCollections.observableArrayList();
            for (var date in data) {
                series.add(new Data(date, data[date]));
            }
            var tweets = new Series("Tweets", series);
            var barChartData = FXCollections.observableArrayList(tweets);
            var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);
            root.children.add(chart);
        }

    Two details here show how smooth the Java integration is. First, the script writes stage.scene = new Scene(root) where Java code would call stage.setScene(new Scene(root)): Nashorn (through Dynalink) exposes JavaBeans getter/setter pairs such as setScene() as plain properties, so they can be assigned directly. Second, in FXCollections.observableArrayList(dates), Nashorn automatically converts the JavaScript array (dates) into a form the Java API accepts, so JavaScript values can be passed straight to Java methods.

    To run this as a JavaFX application, a small Java launcher class extending javafx.application.Application is still needed:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import javafx.application.Application;
        import javafx.stage.Stage;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class TrendingMain extends Application {
            private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
            private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");
            private Trending trending;

            public static void main(String[] args) {
                launch(args);
            }

            @Override
            public void start(Stage stage) throws Exception {
                trending = (Trending) load("Trending.js");
                trending.start(stage);
            }

            @Override
            public void stop() throws Exception {
                trending.stop();
            }

            private Object load(String script) throws IOException, ScriptException {
                try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) {
                    return engine.eval(new InputStreamReader(is, "utf-8"));
                }
            }
        }

    Nashorn is obtained here through the standard JSR-223 javax.script API:

        private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
        private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");

    The load method reads Trending.js from the classpath and evaluates it with that engine. To let the Java side call back into the script, the JavaFX start and stop callbacks are declared on a small Java interface:

        public interface Trending {
            public void start(Stage stage) throws Exception;
            public void stop() throws Exception;
        }

    The script implements this interface with an anonymous subclass and returns the instance as the last expression it evaluates:

        function newTrending() {
            return new Packages.Trending() {
                start: function(stage) {
                    var data = getTrendingData();
                    graph(stage, data);
                    stage.show();
                },
                stop: function() {
                }
            }
        }

        newTrending();

    Because newTrending() is the final expression in Trending.js, its result becomes the return value of eval, which the launcher casts to Trending:

        trending = (Trending) load("Trending.js");

    From that point on, JavaFX lifecycle calls such as trending.start(stage) are dispatched into the JavaScript implementation.

    More on Nashorn: http://www.myexpospace.com/JavaOne2012/SessionFiles/CON5251_PDF_5251_0001.pdf
    More on Dynalink: https://github.com/szegedi/dynalink

    Read the article

  • Using xsl:variable in an xsl:for-each select statement

    - by Nefariousity
    I'm trying to iterate through an XML document using xsl:for-each, but I need the select="" to be dynamic, so I'm using a variable as the source. Here's what I've tried:

        ...
        <xsl:template name="SetDataPath">
          <xsl:param name="Type" />
          <xsl:variable name="Path_1">/Rating/Path1/*</xsl:variable>
          <xsl:variable name="Path_2">/Rating/Path2/*</xsl:variable>
          <xsl:if test="$Type='1'">
            <xsl:value-of select="$Path_1"/>
          </xsl:if>
          <xsl:if test="$Type='2'">
            <xsl:value-of select="$Path_2"/>
          </xsl:if>
        </xsl:template>
        ...
        <!-- Set Data Path according to Type -->
        <xsl:variable name="DataPath">
          <xsl:call-template name="SetDataPath">
            <xsl:with-param name="Type" select="/Rating/Type" />
          </xsl:call-template>
        </xsl:variable>
        ...
        <xsl:for-each select="$DataPath">
        ...

    The for-each threw an error stating: "XslTransformException - To use a result tree fragment in a path expression, first convert it to a node-set using the msxsl:node-set() function." When I use the msxsl:node-set() function, though, my results are blank. I'm aware that I'm setting $DataPath to a string, but shouldn't the node-set() function be creating a node set from it? Am I missing something? When I don't use a variable:

        <xsl:for-each select="/Rating/Path1/*">

    I get the proper results. Here's the XML data file I'm using:

        <Rating>
          <Type>1</Type>
          <Path1>
            <sarah>
              <dob>1-3-86</dob>
              <user>Sarah</user>
            </sarah>
            <joe>
              <dob>11-12-85</dob>
              <user>Joe</user>
            </joe>
          </Path1>
          <Path2>
            <jeff>
              <dob>11-3-84</dob>
              <user>Jeff</user>
            </jeff>
            <shawn>
              <dob>3-5-81</dob>
              <user>Shawn</user>
            </shawn>
          </Path2>
        </Rating>

    My question is simple: how do you run a for-each over two different paths?

    Read the article

  • Bluetooth RFCOMM / SDP connection to a RS232 adapter in android

    - by ThePosey
    Hello all, I am trying to use the Bluetooth Chat sample API app that Google provides to connect to a Bluetooth RS232 adapter hooked up to another device. Here is the app for reference:

    http://developer.android.com/resources/samples/BluetoothChat/index.html

    And here is the spec sheet for the RS232 connector, just for reference:

    http://serialio.com/download/Docs/BlueSnap-guide-4.77_Commands.pdf

    The problem is that when I go to connect to the device with mmSocket.connect() (BluetoothSocket::connect()), I always get an IOException thrown by the connect() method. When I do a toString on the exception I get "Service discovery failed".

    My question is mostly: what are the cases that would cause an IOException to get thrown in the connect method? I know those are in the source somewhere, but I don't know exactly how the Java layer that you write apps in interfaces with the C/C++ layer that contains the actual stacks. I know that it uses the BlueZ Bluetooth stack, which is written in C/C++, but I'm not sure how that ties into the Java layer, which is what I would think is throwing the exception. Any help on pointing me to where I can try to dissect this issue would be incredible. Also, just to note, I am able to pair with the RS232 adapter just fine, but I am never able to actually connect. Here is the logcat output for more reference:

        I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms)
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        D/BluetoothChat( 1729): onActivityResult -1
        D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51
        D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_CONNECTING
        E/BluetoothChat( 1729): + ON RESUME +
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING
        I/BluetoothChatService( 1729): BEGIN mConnectThread
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed
        D/BluetoothChatService( 1729): setState() STATE_CONNECTING - STATE_LISTEN
        D/BluetoothChatService( 1729): start
        D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_LISTEN
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID
        I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID

    The device I'm trying to connect to is 00:06:66:03:0C:51, which I can scan for and apparently pair with just fine.

    The below is merged from a similar question which was successfully resolved by the selected answer here: How can one connect to an rfcomm device other than another phone in Android?

    The Android API provides examples of using listenUsingRfcommWithServiceRecord() to set up a socket and createRfcommSocketToServiceRecord() to connect to that socket. I'm trying to connect to an embedded device with a BlueSMiRF Gold chip. My working Python code (using the PyBluez library), which I'd like to port to Android, is as follows:

        sock = bluetooth.BluetoothSocket(proto=bluetooth.RFCOMM)
        sock.connect((device_addr, 1))
        return sock.makefile()

    ...so the service to connect to is simply defined as channel 1, without any SDP lookup. As the only documented mechanism I see in the Android API does SDP lookup of a UUID, I'm slightly at a loss. Using "sdptool browse" from my Linux host comes up empty, so I surmise that the chip in question simply lacks SDP support.
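
    For reference, below is a minimal sketch of the approach commonly used for SDP-less serial adapters like this one. It assumes the module speaks the Serial Port Profile on RFCOMM channel 1 (as the PyBluez code above does); the class name and structure are illustrative, and only the android.bluetooth and reflection calls are real API. It first tries the documented createRfcommSocketToServiceRecord() with the well-known SPP UUID, then falls back to the hidden createRfcommSocket(int channel) method via reflection to open channel 1 directly; calling cancelDiscovery() beforehand also removes a common cause of "Service discovery failed".

        import java.io.IOException;
        import java.lang.reflect.Method;
        import java.util.UUID;

        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;

        public class SppConnector {

            // Well-known Serial Port Profile UUID used by most RS232 bridges.
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

            public static BluetoothSocket connect(BluetoothAdapter adapter, String address)
                    throws IOException {
                BluetoothDevice device = adapter.getRemoteDevice(address);
                adapter.cancelDiscovery(); // an ongoing discovery often breaks connect()

                try {
                    // Documented path: SDP lookup of the SPP service record.
                    BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
                    socket.connect();
                    return socket;
                } catch (IOException sdpFailed) {
                    // Fallback when the device exposes no usable SDP record:
                    // open RFCOMM channel 1 directly via the hidden API.
                    try {
                        Method m = device.getClass().getMethod("createRfcommSocket", int.class);
                        BluetoothSocket socket = (BluetoothSocket) m.invoke(device, 1);
                        socket.connect();
                        return socket;
                    } catch (Exception reflectionFailed) {
                        throw new IOException("RFCOMM channel 1 fallback failed");
                    }
                }
            }
        }

    Whether the reflection fallback is needed at all depends on the adapter; on many of these modules the SPP UUID route alone works once discovery has been cancelled and the devices are paired.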

    Read the article

  • using Silverlight 3's HtmlPage.Window.Navigate method to reuse an already open browser window

    - by Phil
    Hi, I want to use an external browser window to implement a preview functionality in a silverlight application. There is a list of items and whenever the user clicks one of these items, it's opened in a separate browser window (the content is a pdf document, which is why it is handled ouside of the SL app). Now, to achieve this, I simply use HtmlPage.Window.Navigate(new Uri("http://www.bing.com")); which works fine. Now my client doesn't like the fact that every click opens up a new browser window. He would like to see the browser window reused every time an item is clicked. So I went out and tried implementing this: Option 1 - Use the overload of the Navigate method, like so: HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "foo"); I was assuming that the window would be reused when the same target parameter value (foo) would be used in subsequent calls. This does not work. I get a new window every time. Option 2 - Use the PopupWindow method on the HtmlPage HtmlPage.PopupWindow(new Uri("http://www.bing.com"), "blah", new HtmlPopupWindowOptions()); This does not work. I get a new window every time. Option 3 - Get a handle to the opened window and reuse that in subsequent calls private HtmlWindow window; private void navigationButton_Click(object sender, RoutedEventArgs e) { if (window == null) window = HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "blah"); else window.Navigate(new Uri("http://www.bing.com"), "blah"); if (window == null) MessageBox.Show("it's null"); } This does not work. I tried the same for the PopupWindow() method and the window is null every time, so a new window is opened on every click. I have checked both the EnableHtmlAccess and the IsPopupWindowAllowed properties, and they return true, as they should. Option 4 - Use Eval method to execute some custom javascript private const string javascript = @"var popup = window.open('', 'blah') ; if(popup.location != 'http://www.bing.com' ){ popup.location = 'http://www.bing.com'; } popup.focus();"; private void navigationButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Eval(javascript); } This does not work. I get a new window every time. option 5 - Use CreateInstance to run some custom javascript on the page private void navigationButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.CreateInstance("thisIsPlainHell"); } and in my aspx I have function thisIsPlainHell() { var popup = window.open('http://www.bing.com', 'blah'); popup.focus(); } Guess what? This does work. The only thing is that the window behaves a little strange and I'm not sure why: I'm behind a proxy and in all other scenarios I'm being prompted for my password. In this case however I am not (and am thus not able to reach the external site - bing in this case). This is not really a huge issue atm, but I just don't understand what's goign on here. Whenever I type another url in the address bar of the popup window (eg www.google.com) and press enter, it opens up another window and prompts me for my proxy password. As a temporary solution option 5 could do, but I don't like the fact that Silverlight is not able to manage this. One of the main reasons my client has opted for Silverlight is to be protected against all the browser specific hacking that comes with javascript. Am I doing something wrong? I'm definitely no javascript expert, so I'm hoping it's something obvious I'm missing here. Cheers, Phil

    Read the article

  • WPF Styles and Tooltips Question

    - by A.R.
    I have a style that I am using to make dynamic tooltips on certain text boxes like so. <Style TargetType="{x:Type TextBox}"> <Setter Property="MinWidth" Value="100"/> <Style.Triggers> <Trigger Property="Validation.HasError" Value="True"> <!-- item of interest --> <Setter Property="ToolTip"> <Setter.Value> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="Tag"/> </MultiBinding> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> This works very well, but if I want to use a more complex tooltip I can't figure out how to bind to 'Tag' anymore for the converter value. For example; ... <Setter Property="ToolTip"> <Setter.Value> <StackPanel> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <!-- item of interest --> <Binding RelativeSource=" what goes here?? "/> </MultiBinding> </TextBlock.Text> </TextBlock> <Image/> </StackPanel> </Setter.Value> </Setter> ... I have tried several flavors of 'FindAncestor' and what not for the relative source, but I can't get anything to work. Any ideas?? UPDATE: 12-29-2010 : Here is the correct code, answer provided by our friend Goblin below. Works perfectly! ... <Setter Property="ToolTip"> <Setter.Value> <!-- Item of interest --> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}"> <StackPanel> <Image/> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding Path="Tag"/> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ToolTip> </Setter.Value> </Setter> ...

    Read the article

  • NHibernate: which cache to use for a WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible, since the network here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)

    Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.

    Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.

    SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.

    SysCache2: similar to NHibernate.Caches.SysCache, uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.

    MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.

    SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.

    My considerations so far: Velocity seems quite heavyweight and overkill (the files take 467 KB of disk space in total; I haven't measured the RAM it takes because I didn't manage to make it run, see below). Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below). SysCache seems to be for ASP.NET, not for WinForms. MemCache and SharedCache seem to be for distributed scenarios.

    Which one would you suggest I use? There is also a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing."

    Besides the question of which fits my situation best, I also faced problems applying them: Velocity complained that "dcacheClient tag not specified in the application configuration file. Specify valid tag in configuration file", although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and he needs further details to help me.

    Read the article

  • This file does not have a program associated with it for performing this action

    - by Abu Hamzah
    update 2: HKEY_CLASSES_ROOT\folder ContentViewModeLayoutPatternForBrowse REG_SZ delta ContentViewModeForBrowse REG_SZ prop:~System.ItemNameDisplay;~System.LayoutPattern.PlaceHolder;~System.LayoutPattern.PlaceHolder;~System.LayoutPattern.PlaceHolder;System.DateModified ContentViewModeLayoutPatternForSearch REG_SZ alpha ContentViewModeForSearch REG_SZ prop:~System.ItemNameDisplay;System.DateModified;~System.ItemFolderPathDisplay (Default) REG_SZ Folder EditFlags REG_BINARY D2030000 FullDetails REG_SZ prop:System.PropGroup.Description;System.ItemNameDisplay;System.ItemTypeText;System.Size NoRecentDocs REG_SZ ThumbnailCutoff REG_DWORD 0x0 TileInfo REG_SZ prop:System.Title;System.ItemTypeText HKEY_CLASSES_ROOT\folder\DefaultIcon (Default) REG_EXPAND_SZ %SystemRoot%\System32\shell32.dll,3 HKEY_CLASSES_ROOT\folder\shell HKEY_CLASSES_ROOT\folder\shell\explore HKEY_CLASSES_ROOT\folder\shell\explore\command HKEY_CLASSES_ROOT\folder\shell\open MultiSelectModel REG_SZ Document HKEY_CLASSES_ROOT\folder\shell\open\command DelegateExecute REG_SZ {11dbb47c-a525-400b-9e80-a54615a090c0} (Default) REG_EXPAND_SZ %SystemRoot%\Explorer.exe HKEY_CLASSES_ROOT\folder\shell\opennewprocess MUIVerb REG_SZ @shell32.dll,-8518 MultiSelectModel REG_SZ Document Extended REG_SZ LaunchExplorerFlags REG_DWORD 0x3 ExplorerHost REG_SZ {ceff45ee-c862-41de-aee2-a022c81eda92} HKEY_CLASSES_ROOT\folder\shell\opennewprocess\command DelegateExecute REG_SZ {11dbb47c-a525-400b-9e80-a54615a090c0} HKEY_CLASSES_ROOT\folder\shell\opennewwindow MUIVerb REG_SZ @shell32.dll,-8517 MultiSelectModel REG_SZ Document OnlyInBrowserWindow REG_SZ LaunchExplorerFlags REG_DWORD 0x1 HKEY_CLASSES_ROOT\folder\shell\opennewwindow\command DelegateExecute REG_SZ {11dbb47c-a525-400b-9e80-a54615a090c0} HKEY_CLASSES_ROOT\folder\ShellEx HKEY_CLASSES_ROOT\folder\ShellEx\ColumnHandlers HKEY_CLASSES_ROOT\folder\ShellEx\ColumnHandlers\{0561EC90-CE54-4f0c-9C55-E226110A740C} (Default) REG_SZ Haali Column Provider HKEY_CLASSES_ROOT\folder\ShellEx\ColumnHandlers\{F9DB5320-233E-11D1-9F84-707F02C10627} (Default) REG_SZ PDF Column Info HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\Adobe.Acrobat.ContextMenu (Default) REG_SZ {D25B2CAB-8A9A-4517-A9B2-CB5F68A5A802} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\BriefcaseMenu (Default) REG_SZ {85BBD920-42A0-1069-A2E4-08002B30309D} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\ESET Smart Security - Context Menu Shell Extension (Default) REG_SZ {B089FE88-FB52-11D3-BDF1-0050DA34150D} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\LavasoftShellExt (Default) REG_SZ {DCE027F7-16A4-4BEE-9BE7-74F80EE3738F} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\Library Location (Default) REG_SZ {3dad6c5d-2167-4cae-9914-f99e41c12cfa} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\MagicISO (Default) REG_SZ {DB85C504-C730-49DD-BEC1-7B39C6103B7A} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\MBAMShlExt (Default) REG_SZ {57CE581A-0CB6-4266-9CA0-19364C90A0B3} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\WinRAR (Default) REG_SZ {B41DB860-8EE4-11D2-9906-E49FADC173CA} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\WS_FTP (Default) REG_SZ {797F3885-5429-11D4-8823-0050DA59922B} HKEY_CLASSES_ROOT\folder\ShellEx\ContextMenuHandlers\XXX Groove GFS Context Menu Handler XXX (Default) REG_SZ {6C467336-8281-4E60-8204-430CED96822D} HKEY_CLASSES_ROOT\folder\ShellEx\DragDropHandlers HKEY_CLASSES_ROOT\folder\ShellEx\DragDropHandlers\WinRAR (Default) 
REG_SZ {B41DB860-8EE4-11D2-9906-E49FADC173CA} HKEY_CLASSES_ROOT\folder\ShellEx\DragDropHandlers\{BD472F60-27FA-11cf-B8B4-444553540000} (Default) REG_SZ HKEY_CLASSES_ROOT\folder\ShellEx\PropertySheetHandlers HKEY_CLASSES_ROOT\folder\ShellEx\PropertySheetHandlers\BriefcasePage (Default) REG_SZ {85BBD920-42A0-1069-A2E4-08002B30309D} HKEY_CLASSES_ROOT\folder\ShellNew Directory REG_SZ IconPath REG_EXPAND_SZ %SystemRoot%\system32\shell32.dll,3 ItemName REG_SZ @shell32.dll,-30396 MenuText REG_SZ @shell32.dll,-30317 NonLFNFileSpec REG_SZ @shell32.dll,-30319 HKEY_CLASSES_ROOT\folder\ShellNew\Config AllDrives REG_SZ IsFolder REG_SZ NoExtension REG_SZ update: HKEY_CLASSES_ROOT\Directory AlwaysShowExt REG_SZ (Default) REG_SZ File Folder EditFlags REG_BINARY D2010000 FriendlyTypeName REG_SZ @shell32.dll,-10152 FullDetails REG_SZ prop:System.PropGroup.Description;System.DateCreated;System.FileCount;System.TotalFileSize InfoTip REG_SZ prop:System.Comment;System.DateCreated NoRecentDocs REG_SZ PreviewDetails REG_SZ prop:System.DateModified;*System.SharedWith;*System.OfflineAvailability;*System.OfflineStatus PreviewTitle REG_SZ prop:System.ItemNameDisplay;System.ItemTypeText HKEY_CLASSES_ROOT\Directory\Background HKEY_CLASSES_ROOT\Directory\Background\shell HKEY_CLASSES_ROOT\Directory\Background\shell\cmd (Default) REG_SZ @shell32.dll,-8506 Extended REG_SZ NoWorkingDirectory REG_SZ HKEY_CLASSES_ROOT\Directory\Background\shell\cmd\command (Default) REG_SZ cmd.exe /s /k pushd "%V" HKEY_CLASSES_ROOT\Directory\Background\shellex HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\Gadgets (Default) REG_SZ {6B9228DA-9C15-419e-856C-19E768A13BDC} HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\igfxcui (Default) REG_SZ {3AB1675A-CCFF-11D2-8B20-00A0C93CB1F4} HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\New (Default) REG_SZ {D969A300-E7FF-11d0-A93B-00A0C90F2719} HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\Sharing (Default) REG_SZ {f81e9010-6ea4-11ce-a7ff-00aa003ca9f6} HKEY_CLASSES_ROOT\Directory\Background\shellex\ContextMenuHandlers\XXX Groove GFS Context Menu Handler XXX (Default) REG_SZ {6C467336-8281-4E60-8204-430CED96822D} HKEY_CLASSES_ROOT\Directory\DefaultIcon (Default) REG_EWindows Windows XPAND_SZ %SystemRoot%\System32\shell32.dll,3 HKEY_CLASSES_ROOT\Directory\shell (Default) REG_SZ none HKEY_CLASSES_ROOT\Directory\shell\cmd (Default) REG_SZ @shell32.dll,-8506 Extended REG_SZ NoWorkingDirectory REG_SZ HKEY_CLASSES_ROOT\Directory\shell\cmd\command (Default) REG_SZ cmd.exe /s /k pushd "%V" HKEY_CLASSES_ROOT\Directory\shell\find LegacyDisable REG_SZ SuppressionPolicy REG_DWORD 0x80 HKEY_CLASSES_ROOT\Directory\shell\find\command (Default) REG_EWindows Windows XPAND_SZ %SystemRoot%\EWindows Windows XPlorer.exe DelegateExecute REG_SZ {a015411a-f97d-4ef3-8425-8a38d022aebc} HKEY_CLASSES_ROOT\Directory\shell\find\ddeexec (Default) REG_SZ [FindFolder("%l", %I)] NoActivateHandler REG_SZ HKEY_CLASSES_ROOT\Directory\shell\find\ddeexec\application (Default) REG_SZ Folders HKEY_CLASSES_ROOT\Directory\shell\find\ddeexec\topic (Default) REG_SZ AppProperties HKEY_CLASSES_ROOT\Directory\shell\OneNote.Open (Default) REG_SZ Open as Notebook in OneNote HKEY_CLASSES_ROOT\Directory\shell\OneNote.Open\Command (Default) REG_SZ C:\PROGRA~1\Microsoft Office\Office12\ONENOTE.EXE "%L" HKEY_CLASSES_ROOT\Directory\shellex HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers 
HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\CuteFTP 8 Professional (Default) REG_SZ {8f7261d0-d2b9-11d2-9909-00605205b24c} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\EncryptionMenu (Default) REG_SZ {A470F8CF-A1E8-4f65-8335-227475AA5C46} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\MagicISO (Default) REG_SZ {DB85C504-C730-49DD-BEC1-7B39C6103B7A} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\Sharing (Default) REG_SZ {f81e9010-6ea4-11ce-a7ff-00aa003ca9f6} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\ShellExtension HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\WinRAR (Default) REG_SZ {B41DB860-8EE4-11D2-9906-E49FADC173CA} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\XXX Groove GFS Context Menu Handler XXX (Default) REG_SZ {6C467336-8281-4E60-8204-430CED96822D} HKEY_CLASSES_ROOT\Directory\shellex\ContextMenuHandlers\{596AB062-B4D2-4215-9F74-E9109B0A8153} HKEY_CLASSES_ROOT\Directory\shellex\CopyHookHandlers HKEY_CLASSES_ROOT\Directory\shellex\CopyHookHandlers\FileSystem (Default) REG_SZ {217FC9C0-3AEA-1069-A2DB-08002B30309D} HKEY_CLASSES_ROOT\Directory\shellex\CopyHookHandlers\Sharing (Default) REG_SZ {40dd6e20-7c17-11ce-a804-00aa003ca9f6} HKEY_CLASSES_ROOT\Directory\shellex\DragDropHandlers HKEY_CLASSES_ROOT\Directory\shellex\DragDropHandlers\WinRAR (Default) REG_SZ {B41DB860-8EE4-11D2-9906-E49FADC173CA} HKEY_CLASSES_ROOT\Directory\shellex\DragDropHandlers\WS_FTP (Default) REG_SZ {1D83C7B3-C931-4850-BED0-D3FE8B3F5808} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\Sharing (Default) REG_SZ {f81e9010-6ea4-11ce-a7ff-00aa003ca9f6} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\{1f2e5c40-9550-11ce-99d2-00aa006e086c} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\{4a7ded0a-ad25-11d0-98a8-0800361b1103} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\{596AB062-B4D2-4215-9F74-E9109B0A8153} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\{ECCDF543-45CC-11CE-B9BF-0080C87CDBA6} HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\{ef43ecfe-2ab9-4632-bf21-58909dd177f0} (Default) REG_SZ I updated my IE9 from IE8 and after I reboot my machine and try to access my computer drive and I get this error message whenever I try to double click c:\ drive or other drives but other than that everything seems to be working fince except that I can not access my drives.... its very strange any help? <<<This file does not have a program associated with it for performing this action. Please install a program or, if one is already installed, create an association in the Default Programs control panel>>> using Windows 7 32 bit

    Read the article

  • VS2010 Assembly Load Error

    - by Nate
    I am getting the following error when I try to build an ASP.NET 4 project in Visual Studio 2010: "Could not load file or assembly 'file:///C:\Dev\project\trunk\bin\Elmah.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)". I have verified that the dll does, in fact, exist, and is getting copied to the bin folder correctly. I have also tried removing and then re-adding the reference to the project. The build only fails when I switch the Solution Configuration to "Release". It does not fail when the Solution Configuration is set to "Debug". The only difference between the two configurations (that I know of) is shown in the following Web.config transform, Web.Release.config: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="SqlServer" connectionString="" providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors mode="On" xdt:Transform="Replace"> <error statusCode="404" redirect="lost.htm" /> <error statusCode="500" redirect="uhoh.htm" /> </customErrors> </system.web> </configuration> I have tried using Fusion Log Viewer to track down the assembly binding issue, but it looks like it is finding and loading the assembly correctly. Here is the log: *** Assembly Binder Log Entry (6/8/2010 @ 10:01:54 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll Running under executable c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\sgen.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = User LOG: Where-ref bind. Location = C:\Dev\project\trunk\bin\Elmah.dll LOG: Appbase = file:///c:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/bin/NETFX 4.0 Tools/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = sgen.exe Calling assembly : (Unknown). === LOG: This bind starts in LoadFrom load context. WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load(). LOG: No application configuration file found. LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config. LOG: Attempting download of new URL file:///C:/Dev/project/trunk/bin/Elmah.dll. LOG: Assembly download was successful. Attempting setup of file: C:\Dev\project\trunk\bin\Elmah.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: Elmah, Version=1.1.11517.0, Culture=neutral, PublicKeyToken=null LOG: Re-apply policy for where-ref bind. LOG: Where-ref bind Codebase does not match what is found in default context. Keep the result in LoadFrom context. LOG: Binding succeeds. Returns assembly from C:\Dev\project\trunk\bin\Elmah.dll. LOG: Assembly is loaded in LoadFrom load context. I feel like there is a fundamental lack of understanding on my part as to what exactly is going on here. Any explanation/help is much appreciated!

    Read the article

  • Facebook require_login() in iFrame App

    - by LapKom
    Hi, I have serious problem with iframe application. I need to use many external JS libraries and other dynamic stuuf so FMBL application can't be done. When I call require_login() I get applicaition installing dialog when app is not already installed, which is ok. But then after authorization application enters an endless redirect loop with parameters like auth_token, installed and so. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy to find a sollution, none of ones found on net doesn't seem to be working. So far I tried: http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th march 2010!) http://forum.developers.facebook.com/viewtopic.php?pid=156092 http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/ http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/ http://forum.developers.facebook.com/viewtopic.php?pid=210449 http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/ None of the above worked... According to those and some FB docs: http://wiki.developers.facebook.com/index.php/FB_RequireFeatures http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel My example test files look as follow: <?php //Link in library. require_once '../application/vendor/Facebook/facebook.php'; //Authentication Keys $appapikey = 'XXXX'; $appsecret = 'XXXX'; //Construct the class $facebook = new Facebook($appapikey, $appsecret); //Require login $user_id = $facebook->require_login(); ?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <title></title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> This is you: <fb:name uid="<?php echo $user_id?>"></fb:name> <?php var_dump($facebook->$this->facebook->api_client->friends_get())?> <script type="text/javascript"> FB_RequireFeatures(["XFBML"], function(){ FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html"); }); </script> </body> </html> And cross-domain file xd_receiver.html is: <!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title>cross-domain receiver page</title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script> </body> </html> How do I get it working? I'm using Kohana framework to do this and already replaced header('Location') with url::redirect() in facebook php library.

    Read the article

  • Why does my sharepoint web part event handler lose the sender value on postback?

    - by vishal shah
    I have a web part which is going to be a part of pair of connected web parts. For simplicity, I am just describing the consumer web part. This web part has 10 link buttons on it. And they are rendered in the Render method instead ofCreateChildControls as this webpart will be receiving values based on input from the provider web part. Each Link Button has a text which is decided dynamically based on the input from provider web part. When I click on any of the Link Buttons, the event handler is triggered but the text on the Link Button shows up as the one set in CreateChildControls. When I trace the code, I see that the CreateChildControls gets called before the event handler (and i think that resets my Link Buttons). How do I get the event handler to show me the dynamic text instead? Here is the code... public class consWebPart : Microsoft.SharePoint.WebPartPages.WebPart { private bool _error = false; private LinkButton[] lkDocument = null; public consWebPart() { this.ExportMode = WebPartExportMode.All; } protected override void CreateChildControls() { if (!_error) { try { base.CreateChildControls(); lkDocument = new LinkButton[101]; for (int i = 0; i < 10; i++) { lkDocument[i] = new LinkButton(); lkDocument[i].ID = "lkDocument" + i; lkDocument[i].Text = "Initial Text"; lkDocument[i].Style.Add("margin", "10 10 10 10px"); this.Controls.Add(lkDocument[i]); lkDocument[i].Click += new EventHandler(lkDocument_Click); } } catch (Exception ex) { HandleException(ex); } } } protected override void Render(HtmlTextWriter writer) { writer.Write("<table><tr>"); for (int i = 0; i < 10; i++) { writer.Write("<tr>"); lkDocument[i].Text = "LinkButton" + i; writer.Write("<td>"); lkDocument[i].RenderControl(writer); writer.Write("</td>"); writer.Write("</tr>"); } writer.Write("</table>"); } protected void lkDocument_Click(object sender, EventArgs e) { string strsender = sender.ToString(); LinkButton lk = (LinkButton)sender; } protected override void OnLoad(EventArgs e) { if (!_error) { try { base.OnLoad(e); this.EnsureChildControls(); } catch (Exception ex) { HandleException(ex); } } } private void HandleException(Exception ex) { this._error = true; this.Controls.Clear(); this.Controls.Add(new LiteralControl(ex.Message)); } }

    Read the article

  • Cisco PIX 8.0.4, static address mapping not working?

    - by Bill
    upgrading a working Pix running 5.3.1 to 8.0.4. The memory/IOS upgrade went fine, but the 8.0.4 configuration is not quite working 100%. The 5.3.1 config on which it was based is working fine. Basically, I have three networks (inside, outside, dmz) with some addresses on the dmz statically mapped to outside addresses. The problem seems to be that those addresses can't send or receive traffic from the outside (Internet.) Stuff on the DMZ that does not have a static mapping seems to work fine. So, basically: Inside - outside: works Inside - DMZ: works DMZ - inside: works, where the rules allow it DMZ (non-static) - outside: works But: DMZ (static) - outside: fails Outside - DMZ: fails (So, say, udp 1194 traffic to .102, http to .104) I suspect there's something I'm missing with the nat/global section of the config, but can't for the life of me figure out what. Help, anyone? The complete configuration is below. Thanks for any thoughts! ! PIX Version 8.0(4) ! hostname firewall domain-name asasdkpaskdspakdpoak.com enable password xxxxxxxx encrypted passwd xxxxxxxx encrypted names ! interface Ethernet0 nameif outside security-level 0 ip address XX.XX.XX.100 255.255.255.224 ! interface Ethernet1 nameif inside security-level 100 ip address 192.168.68.1 255.255.255.0 ! interface Ethernet2 nameif dmz security-level 10 ip address 192.168.69.1 255.255.255.0 ! boot system flash:/image.bin ftp mode passive dns server-group DefaultDNS domain-name asasdkpaskdspakdpoak.com access-list acl_out extended permit udp any host XX.XX.XX.102 eq 1194 access-list acl_out extended permit tcp any host XX.XX.XX.104 eq www access-list acl_dmz extended permit tcp host 192.168.69.10 host 192.168.68.17 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq 5901 access-list acl_dmz extended permit udp host 192.168.69.103 any eq ntp access-list acl_dmz extended permit udp host 192.168.69.103 any eq domain access-list acl_dmz extended permit tcp host 192.168.69.103 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8080 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8099 access-list acl_dmz extended permit tcp host 192.168.69.105 any eq www access-list acl_dmz extended permit tcp host 192.168.69.103 any eq smtp access-list acl_dmz extended permit tcp host 192.168.69.105 host 192.168.68.103 eq ssh access-list acl_dmz extended permit tcp host 192.168.69.104 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq https pager lines 24 mtu outside 1500 mtu inside 1500 mtu dmz 1500 icmp unreachable 
rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 static (dmz,outside) XX.XX.XX.103 192.168.69.11 netmask 255.255.255.255 static (inside,dmz) 192.168.68.17 192.168.68.17 netmask 255.255.255.255 static (inside,dmz) 192.168.68.100 192.168.68.100 netmask 255.255.255.255 static (inside,dmz) 192.168.68.101 192.168.68.101 netmask 255.255.255.255 static (inside,dmz) 192.168.68.102 192.168.68.102 netmask 255.255.255.255 static (inside,dmz) 192.168.68.103 192.168.68.103 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.104 192.168.69.100 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.105 192.168.69.105 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.102 192.168.69.10 netmask 255.255.255.255 access-group acl_out in interface outside access-group acl_dmz in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.97 1 route dmz 10.71.83.0 255.255.255.0 192.168.69.10 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 192.168.68.17 255.255.255.255 inside telnet timeout 5 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp ! service-policy global_policy global prompt hostname context Cryptochecksum:2d1bb2dee2d7a3e45db63a489102d7de

    Read the article

  • ModalPopupExtender z-index value decreases after every show

    - by ryanulit
    This is a new one I have never seen before. I have a GridView containing a bunch of categories that can be edited by clicking the respective "Edit" link in the GridView. The ModalPopupExtender is then shown programmatically (the .Show() method) and the user edits the category. The popup is then hidden programmatically (the .Hide() method) when the user presses "Update" or "Cancel". For some reason, after every show of the modal popup the z-index decreases by 1000 until the popup is hidden behind everything else on the page. It starts at 7000 for the very first show, so the user can only edit a limited number of categories before the popup disappears behind the page. Any ideas why this is happening?

    CSS class used on the popup panel:

        DIV.box-pop
        {
            border: #95aab9 1px solid;
            background-color: #ECF1F5;
            display: block;
            position: relative;
            margin: -6px 6px 6px -6px;
            padding: 4px;
            z-index: 10000;
        }

    Panel used for the popup:

        <asp:Panel ID="PanelModify" runat="server" Width="250px" CssClass="box-pop">
            <asp:UpdatePanel ID="UpdatePanelModify" runat="server" UpdateMode="Conditional">
                <ContentTemplate>
                    <table width="100%" cellpadding="3" cellspacing="3">
                        <tr>
                            <td>
                                <div class="box">
                                    <h1><span><strong><asp:Literal ID="LiteralModalTitle" runat="server" /></strong></span></h1>
                                    <table border="0" width="100%">
                                        <tr>
                                            <td>
                                                <asp:TextBox ID="TextBoxModifiedText" runat="server" Width="173px" ValidationGroup="Modify"></asp:TextBox>
                                                <asp:RequiredFieldValidator ID="RequiredFieldValidatorModifiedText" runat="server" ValidationGroup="Modify"
                                                    ErrorMessage="*" ControlToValidate="TextBoxModifiedText" Display="Dynamic">
                                                </asp:RequiredFieldValidator>
                                            </td>
                                        </tr>
                                        <tr>
                                            <td>
                                                <asp:Button ID="ButtonUpdate" runat="server" Text="Update" ValidationGroup="Modify" /><asp:Button ID="ButtonInsert" runat="server" Text="Insert" ValidationGroup="Modify" />
                                                &nbsp;
                                                <asp:Button ID="ButtonCancel" runat="server" Text="Cancel" CausesValidation="false" />
                                            </td>
                                        </tr>
                                    </table>
                                </div>
                            </td>
                        </tr>
                    </table>
                </ContentTemplate>
            </asp:UpdatePanel>
        </asp:Panel>
        <ajaxToolkit:ModalPopupExtender ID="ModalPopupExtenderModify" runat="server" PopupControlID="PanelModify"
            TargetControlID="ButtonHideModify" BackgroundCssClass="modalBackground">
        </ajaxToolkit:ModalPopupExtender>
        <asp:Button ID="ButtonHideModify" runat="server" Style="display: none;" />
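    One workaround worth trying (a sketch only, not a confirmed fix) is to re-assert the panel's z-index from the code-behind right before every Show() call, so that repeated show/hide cycles can never leave the popup below the rest of the page. The handler name below is hypothetical (the real code lives wherever Show() is currently called), and whether the inline style wins over the extender's own z-index handling is an assumption worth verifying:

        // Hypothetical edit handler; substitute whatever event currently calls Show().
        protected void GridViewCategories_RowEditing(object sender, GridViewEditEventArgs e)
        {
            // Re-apply the intended stacking order before showing the popup
            // (assumption: the extender does not lower an explicitly set inline z-index).
            PanelModify.Style["z-index"] = "10000";
            ModalPopupExtenderModify.Show();
        }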

    Read the article

  • localhost/phpmyadmin returns a blank page

    - by Atul Modi
    When I tried configuring local machine as a Internet Gateway with website development capabilities over it and I installed all required software into it. I also had disable the selinux into it. But PROBLEM is when I do http://localhost/phpMyAdmin or all lower case than the page shows it as a blank page. I am pasting code from httpd.conf file into this as well as from phpMyAdmin.conf file I am using Fedora 16 for this. httpd.conf ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule dbd_module modules/mod_dbd.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module 
modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost UseCanonicalName Off DocumentRoot "/var/www/html" Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all UserDir disabled DirectoryIndex index.html index.htm index.php AccessFileName .htaccess Order allow,deny Deny from all Satisfy All TypesConfig /etc/mime.types DefaultType text/plain MIMEMagicFile conf/magic HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-tar .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .xml AddHandler application/x-httpd-php .xml AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en ForceLanguagePriority Prefer Fallback ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4.0" force-response-1.0 BrowserMatch "Java/1.0" force-response-1.0 BrowserMatch "JDK/1.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully Order allow,deny Allow from all # phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Can anyone help into this area please. Urgent reply will be appreciatable because i am struggling since one and half month for this matter. thank you, Atul

    Read the article

  • Partial view links not working in Firefox

    - by user329540
    I have an ASP.NET MVC 4 application with two layouts: a main layout for the main page and a second layout for the nested pages. The problem is with the second layout: on it I call a partial view that contains my navigation links. In IE the navigation menu displays fine and each item navigates as expected when clicked. In Firefox, however, the navigation bar is displayed when the page renders but it has no click functionality; it behaves as if it were plain text.

    My nested-page layout:

        <header>
            <img src="../../Images/fronttop.png" id="nestedPageheader" alt="Background Img"/>
            <div class="content-wrapper">
                <section >
                    <nav>
                        <div id="navcontainer">
                        </div>
                    </nav>
                </section>
            <div>
        </header>

    The script on the layout page that retrieves the partial view and the information for the dynamic links:

        <script type="text/javascript">
            var menuLoaded = false;
            $(document).ready(function () {
                if ($('#navcontainer')[0].innerHTML.trim() == "") {
                    $.ajax({
                        url: "@Url.Content("~/Home/MenuLayout")",
                        type: "GET",
                        success: function (response, status, xhr) {
                            var nvContainer = $('#navcontainer');
                            nvContainer.html(response);
                            menuLoaded = true;
                        },
                        error: function (XMLHttpRequest, textStatus, errorThrown) {
                            var nvContainer = $('#navcontainer');
                            nvContainer.html(errorThrown);
                        }
                    });
                }
            });
        </script>

    My partial view:

        @model Mscl.OpCost.Web.Models.stuffmodel
        <div class="menu">
            <ul>
                <li><a>@Html.ActionLink("Home", "Index", "Home")</a></li>
                <li><a>@Html.ActionLink("some stuff", "stuffs", "stuff")</a></li>
                <li>
                    <h5><a><span>somestuff</span></a></h5>
                    <ul>
                        <li><a>stuffs1s</a>
                            <ul>
                                @foreach (var image in Model.stuffs.Where(g => g.Grouping == 1))
                                {
                                    <li>
                                        <a>@Html.ActionLink(image.Title, "stuffs", "stuff", new { Id = image.CategoryId }, null)</a>
                                    </li>
                                }
                            </ul>
                        </li>
                    </ul>
                </il>
            </ul>
        </div>

    I need to know why this works fine in IE but not in Firefox (all versions). Any assistance would be appreciated.
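    For context, a minimal sketch of the controller action the AJAX call above assumes. The action name comes from the post's URL; the partial view name, the helper, and how the model is populated are assumptions, not code from the original question:

        using System.Web.Mvc;
        using Mscl.OpCost.Web.Models;

        public class HomeController : Controller
        {
            // Returns the navigation menu partial view for the layout's $.ajax("~/Home/MenuLayout") call.
            public ActionResult MenuLayout()
            {
                stuffmodel model = BuildMenuModel();  // hypothetical helper that loads the menu data
                return PartialView("_Menu", model);   // "_Menu" is an assumed partial view name
            }

            private stuffmodel BuildMenuModel()
            {
                // Assumption: whatever the real application does to build its menu model goes here.
                return new stuffmodel();
            }
        }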

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion in the community which it would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models - below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    - gitworkflows(7) Manual Page
    - (nvie) A successful Git branching model
    - (reinh) A Git Workflow for Agile Teams

    Merging vs. rebasing (tangled vs. sequential history) - the advice on this is as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until the task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property of merging - that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (or worse, merged, or something is based on it), then the only option remaining is cherry-picking, with its associated perils... What simple rules like these do you use? Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature only to start another one feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-picking)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose work into topic branches? We realize that it would be awesome to assemble a finished integration from topic branches, but often the work of our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, per the question above?

    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches - an illustration, please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on Stack Overflow I have checked out:
    - What are some good strategies to allow deployed applications to be hotfixable?
    - Workflow description for git usage for in-house development
    - Git workflow for corporate Linux kernel development
    - How do you maintain development code and production code? (thanks for this PDF!)
    - git releases management
    - Git Cherry-pick vs Merge Workflow
    - How to cherry-pick multiple commits
    - How do you merge selective files with git-merge?
    - How to cherry pick a range of commits and merge into another branch
    - ReinH Git Workflow
    - git workflow for making modifications you’ll never push back to origin
    - Cherry-pick a merge
    - Proper Git workflow for combined OS and Private code?
    - Maintaining Project with Git
    - Why cant Git merge file changes with a modified parent/master.
    - Git branching / rebasing good practices
    - When will "git pull --rebase" get me in to trouble?

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware: Sun Blade X6270 2* LSISAS1068E SAS controllers 2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD) Fedora Core 12 2.6.33 release kernel from FC13 (also tried with latest 2.6.31 kernel from FC12, same results) Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4 port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. multipath { rr_min_io 100 uid 0 path_grouping_policy multibus failback manual path_selector "round-robin 0" rr_weight priorities alias somealias no_path_retry queue mode 0644 gid 0 wwid somewwid } I tried values of 50, 100, 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/O's are getting properly spread out. According to /proc/interrupts, the SAS controllers are using a "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller: echo 2 /proc/irq/24/smp_affinity echo 4 /proc/irq/26/smp_affinity Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so: taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect. 
    Cpu0  : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id,  7.5%wa, 0.0%hi, 0.2%si, 0.0%st
    Cpu1  : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id,  0.2%wa, 0.0%hi, 6.0%si, 0.0%st
    Cpu2  : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id,  0.1%wa, 0.0%hi, 4.8%si, 0.0%st
    Cpu3  : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu4  : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id,  4.9%wa, 0.0%hi, 8.1%si, 0.0%st
    Cpu5  : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu6  : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu7  : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu8  : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu9  : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!

    Read the article

  • Object of type 'customObject' cannot be converted to type 'customObject'.

    - by Phani Kumar PV
    I am receiving the following error when invoking a method with a custom object: "Object of type 'customObject' cannot be converted to type 'customObject'." This is the scenario in which I get the error: I am invoking a method in a DLL dynamically - load an assembly, CreateInstance, then call MethodInfo.Invoke(). Passing an int or a string as a parameter to my method works fine; no exceptions are thrown. But if I try to pass one of my own custom class objects as a parameter, I get an ArgumentException (not an ArgumentOutOfRangeException or ArgumentNullException): "Object of type 'customObject' cannot be converted to type 'customObject'."

    I am doing this in a web application. The class file containing the method is in a different project, and the custom object is a separate class in the same file. There is no static assembly in my code. I am trying to invoke a web method dynamically; this web method takes the customObject type as an input parameter. So when I invoke the web method I dynamically create the proxy assembly, and from that same assembly I try to create an instance of the custom object, assign values to its properties, and then pass that object as a parameter when invoking the method. Everything is dynamic and nothing is created statically.. :( Add Reference is not used. Following is a sample of the code I am using:

        public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args)
        {
            System.Net.WebClient client = new System.Net.WebClient();
            //-Connect To the web service
            using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl"))
            {
                //--Now read the WSDL file describing a service.
                ServiceDescription description = ServiceDescription.Read(stream);
                ///// LOAD THE DOM /////////
                //--Initialize a service description importer.
                ServiceDescriptionImporter importer = new ServiceDescriptionImporter();
                importer.ProtocolName = "Soap12"; // Use SOAP 1.2.
                importer.AddServiceDescription(description, null, null);
                //--Generate a proxy client.
                importer.Style = ServiceDescriptionImportStyle.Client;
                //--Generate properties to represent primitive values.
                importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties;
                //--Initialize a Code-DOM tree into which we will import the service.
                CodeNamespace nmspace = new CodeNamespace();
                CodeCompileUnit unit1 = new CodeCompileUnit();
                unit1.Namespaces.Add(nmspace);
                //--Import the service into the Code-DOM tree. This creates proxy code
                //--that uses the service.
                ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1);
                if (warning == 0) //--If zero then we are good to go
                {
                    //--Generate the proxy code
                    CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp");
                    //--Compile the assembly proxy with the appropriate references
                    string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" };
                    CompilerParameters parms = new CompilerParameters(assemblyReferences);
                    CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1);
                    //-Check For Errors
                    if (results.Errors.Count > 0)
                    {
                        StringBuilder sb = new StringBuilder();
                        foreach (CompilerError oops in results.Errors)
                        {
                            sb.AppendLine("========Compiler error============");
                            sb.AppendLine(oops.ErrorText);
                        }
                        throw new System.ApplicationException("Compile error occurred calling web service. " + sb.ToString());
                    }
                    //--Finally, Invoke the web service method
                    Type foundType = null;
                    Type[] types = results.CompiledAssembly.GetTypes();
                    foreach (Type type in types)
                    {
                        if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol))
                        {
                            Console.WriteLine(type.ToString());
                            foundType = type;
                        }
                    }
                    object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString());
                    MethodInfo mi = wsvcClass.GetType().GetMethod(methodName);
                    return mi.Invoke(wsvcClass, args);
                }
                else
                {
                    return null;
                }
            }
        }

    I can't find anything static being done by me. Any help is greatly appreciated. Regards, Phani Kumar PV

    Read the article

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an AppDomain, there are many different locations that Fusion (the .NET assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .NET runtime (the internal Assembly._nLoad method seems to be the entry point when reflect-loading, and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths.

    My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup, but the benefits we get from this component totally outweigh this. The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the DLL if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later):

        void PreLoad(IEnumerable<string> paths)
        {
            foreach (string p in paths)
            {
                PreLoad(p);
            }
        }

        void PreLoad(string p)
        {
            // all try/catch blocks are elided for brevity
            string[] files = null;
            files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);
            AssemblyName a = null;
            foreach (var s in files)
            {
                a = AssemblyName.GetAssemblyName(s);
                if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                    assembly => AssemblyName.ReferenceMatchesDefinition(
                        assembly.GetName(), a)))
                    Assembly.LoadFrom(s);
            }
        }

    LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for one, it doesn't find it loaded from where it expects. So, with this in place, all I now have to do is get a list, in precedence order (highest to lowest), of the search paths that Fusion is going to use when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain, which contain assemblies expressly deployed for the app.

    My first iteration of this simply used AppDomain.BaseDirectory. This works for services, forms apps and console apps. It doesn't work for an ASP.NET website, however, since there are at least two main locations: the AppDomain.DynamicDirectory (where ASP.NET places its dynamically generated page classes and any assemblies that the .aspx page code references), and then the site's Bin folder, which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property.

    So I now have working code for the most basic types of apps (SQL Server-hosted AppDomains are another story, since the filesystem is virtualised), but a couple of days ago I came across an interesting case where this code simply doesn't work: the NUnit test runner. It uses shadow copying (so my algorithm would need to discover and load the assemblies from the shadow-copy drop folder, not from the bin folder), and it sets up the PrivateBinPath as relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered, but which must be valid, because otherwise Fusion would choke on loading the assemblies.
    I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up. What I want is, given an AppDomain and its setup information, the ability to produce the list of folders I should scan in order to pick up all the DLLs that are going to be loaded, regardless of how the AppDomain is set up. If Fusion can treat them all the same, then so should my code. Of course, I might have to alter the algorithm if .NET changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge cases that remain unsupported for now. Any ideas!?
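    For readers following along, here is a hedged sketch of what "gleaning the search paths from the AppDomain" might look like, built only from documented AppDomain/AppDomainSetup properties. Whether this list really matches Fusion's internal probing behaviour in every host is exactly the open question of the post, and the class name is invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        static class FusionPathGuess
        {
            // Candidate probing directories derived from an AppDomain's setup information.
            // This is an approximation, not a reproduction of Fusion's real search order.
            public static IEnumerable<string> CandidateProbingPaths(AppDomain domain)
            {
                var setup = domain.SetupInformation;
                var paths = new List<string> { setup.ApplicationBase ?? domain.BaseDirectory };

                // PrivateBinPath can hold several paths separated by semicolons, usually relative to the base.
                if (!string.IsNullOrEmpty(setup.PrivateBinPath))
                {
                    foreach (var bin in setup.PrivateBinPath.Split(';'))
                        paths.Add(Path.IsPathRooted(bin) ? bin : Path.Combine(paths[0], bin));
                }

                // ASP.NET-style dynamically generated assemblies.
                if (!string.IsNullOrEmpty(domain.DynamicDirectory))
                    paths.Add(domain.DynamicDirectory);

                // Shadow-copying hosts (e.g. test runners) drop assemblies under the cache path;
                // a deep scan from there is an assumption that happens to suit the PreLoad code above.
                if (string.Equals(setup.ShadowCopyFiles, "true", StringComparison.OrdinalIgnoreCase)
                    && !string.IsNullOrEmpty(setup.CachePath))
                    paths.Add(setup.CachePath);

                return paths.Where(Directory.Exists).Distinct(StringComparer.OrdinalIgnoreCase);
            }
        }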

    Read the article

  • Modern Java alternatives

    - by Ralph
    I'm not sure if Stack Overflow is the best forum for this discussion. I have been a Java developer for 14 years and have written an enterprise-level (~500,000 line) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress the language has made to "modernize" itself, and am looking for an alternative for ongoing development. I have considered moving to the .NET platform, but I have issues with using something that only runs well on Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new MacBook Pro as soon as Apple releases their rumored Arrandale-based machines, and I want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language.

    Among JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that this will improve with the soon-to-be-released JDK 7 and its InvokeDynamic instruction, but I don't know by how much. Groovy is also not truly a functional language; although it provides closures and some of the "functional" features on collections, it does not embrace immutability. I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses, and I am looking for the Stack Overflow readership's opinions. I am not an expert in either of these languages; I have read two and a half books on Scala and am currently reading Stu Halloway's book on Clojure.

    Scala is strongly statically typed. I know the dynamic-language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time location of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's interoperation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. For instance, I like the Apache Commons CLI library for command-line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code):

        final Options options = new Options() {
            {
                addOption(new Option("?", "help", false, "Show this usage information"));
                // other options
            }
        };

    I can't figure out how to do the same thing in Clojure (except by using (doto ...)), although that may reflect my lack of knowledge of the language.

    Clojure's collections are optimized for immutability; they rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala), and with modern multi-core processors concurrency takes on more importance. But I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super fast, but sometimes they do tasks that take hours or days to complete.

    I know that there is no right answer to this "question", but I thought I would open it up for discussion.
    If anyone has a suggestion for another JVM-based language that can be used for enterprise-level development, please list it. Also, it is not my intent to start a flame war. Thanks, Ralph

    Read the article

  • EXC_BAD_ACCESS in CFAttributedStringSetAttribute and NSNumber?

    - by RichardR
    Hi all, I am getting an infuriating EXC_BAD_ACCESS error in an objective c app I am working on. Any help you could offer would be much appreciated. I have tried the normal debug methods for this error (turning on NSZombieEnabled, checking retain/release/autorelease to make sure I'm not trying to access a deallocated object, etc.) and it hasn't seemed to help. Basically, the error always occurs in this function: ` void op_TJ(CGPDFScannerRef scanner, void *info) { PDFPage *self = info; CGPDFArrayRef array; NSMutableString *tempString = [NSMutableString stringWithCapacity:1]; NSMutableArray *kernArray = [[NSMutableArray alloc] initWithCapacity:1]; if(!CGPDFScannerPopArray(scanner, &array)) { [kernArray release]; return; } for(size_t n = 0; n < CGPDFArrayGetCount(array); n += 2) { if(n >= CGPDFArrayGetCount(array)) continue; CGPDFStringRef pdfString; // if we get a PDF string if (CGPDFArrayGetString(array, n, &pdfString)) { //get the actual string const unsigned char *charstring = CGPDFStringGetBytePtr(pdfString); //add this string to our temp string [tempString appendString:[NSString stringWithCString:(const char*)charstring encoding:[self pageEncoding]]]; //NSLog(@"string: %@", tempString); //get the space after this string CGPDFReal r = 0; if (n+1 < CGPDFArrayGetCount(array)) { CGPDFArrayGetNumber(array, n+1, &r); // multiply by the font size CGFloat k = r; k = -k/1000 * self.tmatrix.a * self.fontSize; CGFloat kKern = self.kern * self.tmatrix.a; k = k + kKern; // add the location and kern to the array NSNumber *tempKern = [NSNumber numberWithFloat:k]; NSLog(@"tempKern address: %p", tempKern); [kernArray addObject:[NSArray arrayWithObjects:[NSNumber numberWithInt:[tempString length] - 1], tempKern, nil]]; } } } // create an attribute string CFMutableAttributedStringRef attString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 10); CFAttributedStringReplaceString(attString, CFRangeMake(0, 0), (CFStringRef)tempString); //apply overall kerning NSNumber *tkern = [NSNumber numberWithFloat:self.kern * self.tmatrix.a * self.fontSize]; CFAttributedStringSetAttribute(attString, CFRangeMake(0, CFAttributedStringGetLength(attString)), kCTKernAttributeName, (CFNumberRef)tkern); //apply individual kern attributes for (NSArray *kernLoc in kernArray) { NSLog(@"kern location: %i, %i", [[kernLoc objectAtIndex:0] intValue],[[kernLoc objectAtIndex:1] floatValue]); CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]); } CFAttributedStringReplaceAttributedString([self cfAttString], CFRangeMake(CFAttributedStringGetLength([self cfAttString]), 0), attString); //release CFRelease(attString); [kernArray release]; } ` The program always crashes because of line CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]) And it seems to depend on a few things: if [kernLoc objectAtIndex:1] refers to an [NSNumber numberWithFloat:k] where k = 0 (in other words, if k = 0 above where I populate kernArray) then the program crashes almost immediately If I comment out the line k = k + kKern, it takes longer for the program to crash, but does eventually (why would the crash depend on this value?) If I change the length of CFRangeMake from 1 to 0, it takes a lot longer for the program to crash, but still eventually does. (I don't think I am trying to access beyond the bounds of attString, but am I missing something?) 
When it crashes, I get something similar to: #0 0x942c7ed7 in objc_msgSend () #1 0x00000013 in ?? () #2 0x0285b827 in CFAttributedStringSetAttribute () #3 0x0000568f in op_TJ (scanner=0x472a590, info=0x4a32320) at /Users/Richard/Desktop/AppTest/PDFHighlight 2/PDFScannerOperators.m:251 Any ideas? It seems like somewhere along the way I am overwriting memory or trying to access memory that has been changed, but I have no idea. If there's anymore information I can provide, please let me know. Thanks, Richard

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the X time (X varies) that I write to SPI_TDR. As a result, the processor hangs on the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write() that belongs to the software package/library provided by ATMEL. The problem occurs arbitrarily - sometimes everything works OK and sometimes it fails on repeated attempts (attemp = downloading the same binary to the MCU and running the program). Configurations are (defined in the order of writing): SPI_MR: MSTR = 1 PS = 0 PCSDEC = 0 PCS = 0111 DLYBCS = 0 SPI_CSR[3]: CPOL = 0 NCPHA = 1 CSAAT = 0 BITS = 0000 SCBR = 20 DLYBS = 0 DLYBCT = 0 SPI_CR: SPIEN = 1 After setting the configurations, the code verifies that the SPI is enabled, by checking the SPIENS flag. I perform a transmission of bytes as follows: const short int dataSize = 5; // Filling array with random data unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF}; short int i = 0; volatile unsigned short dummyRead; SetCS3(); // NPCS3 == PIOA15 while(i-- < dataSize) { mySPI_Write(data[i]); while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); dummyRead = SPI_Read(); // SPI_Read() from Atmel's library } ClearCS3(); /**********************************/ void mySPI_Write(unsigned char data) { while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); AT91C_BASE_SPI0->SPI_TDR = data; while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where // the processor hangs, because that the SPI peripheral is disabled // (SPIENS equals 0), which makes TDRE equal to 0 forever. } Questions: What's causing the SPI peripheral to become disabled on the write to SPI_TDR? Should I un-comment the line in SPI_Write() that reads the SPI_RDR register? Means, the 4th line in the following code: (The 4th line is originally marked as a comment) void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data) { // Discard contents of RDR register //volatile unsigned int discard = spi->SPI_RDR; /* Send data */ while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0); spi->SPI_TDR = data | SPI_PCS(npcs); while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0); } Is there something wrong with the code above that transmits 5 bytes of data? Please note: The NPCS line num. 3 is a GPIO line (means, in PIO mode), and is not controlled by the SPI controller. I'm controlling this line by myself in the code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason that I'm doing so is because that problems occurred while trying to let the SPI controller to control this pin. I didn't reset the SPI peripheral twice, because that the errata tells to reset it twice only if I perform a reset - which I don't do. Quoting the errata: If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.) Problem Fix/Workaround The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be cor- rectly set. I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), then the code works perfectly and the communications succeeds. Useful links: AT91SAM7X Series Preliminary.pdf ATMEL software package/library spi.c from Atmel's library spi.h from Atmel's library An example of initializing the SPI and performing a transfer of 5 bytes is highly appreciated and helpful.

    Read the article

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses an HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily, the mobile device is a read-only copy of the "centralized" storage, so the device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing ones, deletes old ones), I need to propagate those changes back to the mobile app. Assume that server changes are grouped into change sets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.), and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs, and at this point I'm unsure which is the right course of action...

    Method 1: For change set n, store the "diff" between n and the previous set n-1. When a client with version y asks if there have been any changes, send the change sets from version y up to the current version, e.g. "added item 334, contents: xxx; deleted picture 44; deleted PDF 11; changed 33; added picture 99."
    Characteristics:
    - Diffs take up space, although in theory they would be kept small. However, all diffs must be kept around indefinitely (should a v1 app not have been updated for a year, it must apply v2..v100).
    - High-latency devices (mobile apps) incur a penalty for fetching lots of small files (assume they cannot be zipped or tarred up into one file).
    - Very few server CPU resources are required, as all the server does is send the client a list of files.
    - "Dumb": if I change an item in change set 3 and change it to something else in 4, the client performs both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5, the client downloads a file just to delete it later.

    Method 2: Very similar to Method 1, except the server computes a diff between the change sets represented by the app version and the server version, packages that up, and sends that single change set to the client.
    Characteristics:
    - Client-efficient: the client only has to process one file, and duplicate or irrelevant changes are stripped out.
    - Server CPU/space intensive: the change sets must be diffed and then written out to a file that is sent to the client, which makes diff-server scalability an issue. There may be ways to cache and re-use the results, but in the wild there are likely to be many different versions, so re-use has a limit.
    - The diff algorithm is complicated: the change sets must be structured so that an efficient and effective diff can be performed.

    Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When the client requests an update, send the entire database and have the client update its assets appropriately.
    Characteristics:
    - Conceptually simple - easy to develop and deploy.
    - Very inefficient, as the client database is restored on every update. If only one new thing was added, the whole database is refreshed.
    - Server space and CPU efficient: only the latest version of the DB needs to be kept around, and the server just throws the file at the client.

    Others?? Thoughts? Thanks!!
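    To make Method 2's server-side "diff of change sets" concrete, here is a minimal sketch (written in C# purely for illustration; every type and member name is invented, not part of the question). Collapsing a range of change sets means keeping only the last operation recorded per asset, which is exactly what removes the "dumb" redundancy described under Method 1:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical change-set model.
        enum ChangeKind { Add, Modify, Delete }

        class AssetChange
        {
            public ChangeKind Kind;
            public string AssetId;
            public string Payload;   // e.g. a URL or inline content for Add/Modify
        }

        class ChangeSet
        {
            public int Version;
            public List<AssetChange> Changes = new List<AssetChange>();
        }

        static class ChangeSetServer
        {
            // Collapse every change set newer than the client's version into one set,
            // keeping only the last operation per asset, so the client never downloads
            // an asset that a later change set deletes again.
            public static ChangeSet Collapse(IEnumerable<ChangeSet> setsNewerThanClient)
            {
                var latest = new Dictionary<string, AssetChange>();
                int maxVersion = 0;
                foreach (var set in setsNewerThanClient.OrderBy(s => s.Version))
                {
                    maxVersion = set.Version;
                    foreach (var change in set.Changes)
                        latest[change.AssetId] = change;
                }
                return new ChangeSet { Version = maxVersion, Changes = latest.Values.ToList() };
            }
        }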

    Read the article

  • Why doesn't C# implement indexed properties?

    - by Thomas Levesque
    I know, I know... Eric Lippert's answer to this kind of question is usually something like "because it wasn't worth the cost of designing, implementing, testing and documenting it". But still, I'd like a better explanation... I was reading this blog post about new C# 4 features, and in the section about COM interop, the following part caught my attention:

    "By the way, this code uses one more new feature: indexed properties (take a closer look at those square brackets after Range.) But this feature is available only for COM interop; you cannot create your own indexed properties in C# 4.0."

    OK, but why? I already knew and regretted that it isn't possible to create indexed properties in C#, but this sentence made me think about it again. I can see several good reasons to implement them:
    - The CLR supports them (for instance, PropertyInfo.GetValue has an index parameter), so it's a pity we can't take advantage of that in C#.
    - They are supported for COM interop, as shown in the article (using dynamic dispatch).
    - They are implemented in VB.NET.
    - It is already possible to create indexers, i.e. to apply an index to the object itself, so it would probably be no big deal to extend the idea to properties, keeping the same syntax and just replacing this with a property name.
    - It would allow writing this kind of thing:

        public class Foo
        {
            private string[] _values = new string[3];

            public string Values[int index]
            {
                get { return _values[index]; }
                set { _values[index] = value; }
            }
        }

    Currently the only workaround I know of is to create an inner class (ValuesCollection, for instance) that implements an indexer, and change the Values property so that it returns an instance of that inner class. This is very easy to do, but annoying... So perhaps the compiler could do it for us! An option would be to generate an inner class that implements the indexer, and expose it through a public generic interface:

        // interface defined in the namespace System
        public interface IIndexer<TIndex, TValue>
        {
            TValue this[TIndex index] { get; set; }
        }

        public class Foo
        {
            private string[] _values = new string[3];

            private class <>c__DisplayClass1 : IIndexer<int, string>
            {
                private Foo _foo;
                public <>c__DisplayClass1(Foo foo)
                {
                    _foo = foo;
                }
                public string this[int index]
                {
                    get { return _foo._values[index]; }
                    set { _foo._values[index] = value; }
                }
            }

            private IIndexer<int, string> <>f__valuesIndexer;

            public IIndexer<int, string> Values
            {
                get
                {
                    if (<>f__valuesIndexer == null)
                        <>f__valuesIndexer = new <>c__DisplayClass1(this);
                    return <>f__valuesIndexer;
                }
            }
        }

    But of course, in that case the property would actually return an IIndexer<int, string>, and wouldn't really be an indexed property... It would be better to generate a real CLR indexed property.

    What do you think? Would you like to see this feature in C#? If not, why not?
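    For completeness, here is a minimal sketch of the manual workaround the post describes (an inner ValuesCollection exposing an indexer); it is an illustration of that pattern, not code from the original question:

        // No indexed property, but a Values property returning a helper that carries the indexer.
        public class Foo
        {
            private readonly string[] _values = new string[3];

            public ValuesCollection Values { get; private set; }

            public Foo()
            {
                Values = new ValuesCollection(this);
            }

            public sealed class ValuesCollection
            {
                private readonly Foo _owner;

                internal ValuesCollection(Foo owner)
                {
                    _owner = owner;
                }

                public string this[int index]
                {
                    get { return _owner._values[index]; }
                    set { _owner._values[index] = value; }
                }
            }
        }

    Usage then reads foo.Values[1] = "hello";, which is syntactically what an indexed property would give, at the cost of the extra helper type.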

    Read the article

< Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >