Search Results

Search found 871 results on 35 pages for 'lowe simon'.

Page 9/35 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across a particular screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, don't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context. So, in order to convert NUMBER to INTERVAL DAY TO SECOND or INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually three separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild

    Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table, where a separate change forced a table rebuild

    Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL column to NOT NULL

    To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
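    The post describes the mechanism rather than showing the code, but the idea of splicing either a hard-coded conversion function or a user-supplied expression into a rebuild SELECT list can be sketched roughly as follows. This is a hypothetical C# illustration, not the actual Schema Compare engine; the type names, the conversion table and the NUMTODSINTERVAL example expression are assumptions made purely for the sketch.

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch only: picks the expression that will be placed, as-is,
        // into the SELECT list of a table-rebuild script.
        class RebuildSelectListBuilder
        {
            // Built-in conversions that need no extra user input (formats simplified).
            static readonly Dictionary<(string From, string To), string> BuiltIns =
                new Dictionary<(string, string), string>
                {
                    { ("VARCHAR2", "NUMBER"), "TO_NUMBER({0})" },
                    { ("NUMBER", "VARCHAR2"), "TO_CHAR({0})" },
                    // NUMBER -> INTERVAL DAY TO SECOND is deliberately absent:
                    // NUMTODSINTERVAL needs an interval_unit only the user can supply.
                };

            public static string ExpressionFor(string column, string fromType, string toType, string userExpression = null)
            {
                // A user-supplied expression wins, because it is inserted verbatim.
                if (!string.IsNullOrEmpty(userExpression))
                    return userExpression;

                if (BuiltIns.TryGetValue((fromType, toType), out var template))
                    return string.Format(template, column);

                throw new InvalidOperationException($"No built-in conversion from {fromType} to {toType}; ask the user for an expression.");
            }

            static void Main()
            {
                // Built-in conversion: the wizard can pre-fill the 'value' box.
                Console.WriteLine(ExpressionFor("col1", "VARCHAR2", "NUMBER"));
                // No built-in conversion: the wizard collects an expression from the user.
                Console.WriteLine(ExpressionFor("col2", "NUMBER", "INTERVAL DAY TO SECOND", "NUMTODSINTERVAL(col2, 'MINUTE')"));
            }
        }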

    Read the article

  • Best Resources for learning SQL? [closed]

    - by Simon
    Possible Duplicate: Good Books and videos for absolute beginner to SQL I have landed a role as a product engineer for a web based product. A big part of the product is allowing its users to create SQL queries to pull in business information from their back end databases. I know the very basics of SQL and need to spend some time getting a better grasp of it. I have the tutorial from w3schools on my ToDo list, but was hoping to get some answers that point me to good resources for learning SQL. I have no preference - a book (SQL For Dummies?), online resources, online videos, audio, etc.

    Read the article

  • Simpler Times

    - by Simon Moon
    Does anyone else out there long for the simpler days, when you needed to move a jumper in the jumper block to set your modem card to use IRQ7 so it would not conflict with the interrupts used by other boards in your PC, and your modem card came with a 78-page manual telling you everything you would need to know to write your own driver for the board, including a full schematic along with the board layout showing every chip, capacitor, and resistor? Ahhhhh, the simplicity!

    I am wrestling with UserPnp issues for a USB software licensing dongle that is needed by some third party software in one of our production applications. Of course, every machine in production is virtual, so it could be anything in the chain, from the software application library to the device driver running on the VM to the configuration of the simulated USB port to the implementation of the USB connection and transport in the virtual host to the physical electrical connections in the USB port on the hypervisor.

    If only there were the virtual analog to a set of needle-nose pliers to move a virtual jumper. Come to think of it, I always used to drop those damn things such that they would land in an irretrievable position under the motherboard anyway.

    Read the article

  • How should I track multi-valued page attributes (e.g. tags) using custom variables?

    - by Simon
    Our pages can each have many tags, e.g. 'football', 'sms', 'nsfw', etc., which we would like to track in Google Analytics. We're already tracking things like category using Google Analytics custom variables. We've used three of the five available slots so far. How can we track tags the same way? If we just mush them all together - e.g. 'football, sms, nsfw' - can we still track the ones that are tagged 'football'? What's the right way to track multi-valued page attributes using custom variables?

    Read the article

  • How well does Intel 3000 HD work on Ubuntu?

    - by Simon
    Right now I have a notebook with an Nvidia 8400M GS (I know, it's not a good card) and it's impossible to work normally when I plug in an external monitor (1920x1080). Windows 7 can deal with it without problems (1440x900 on the notebook + 1920x1080 external). On Ubuntu I have to choose one screen and turn off the second one. Even with only one screen, Ubuntu (Unity or even GNOME 3) sometimes hangs for a while; I've not found a solution for this yet, but never mind, it's probably because of my card and/or Nvidia's drivers. I'm going to buy a new PC, but for now only with the integrated Intel 3000 HD, and my question is: should I expect similar problems with this card? Here I've found a link to Intel's webpage about drivers - "only the community develops them" - and I'm a bit concerned. I'll then use only one monitor (the bigger one), but how well do those drivers work? Are there any performance tests?

    Read the article

  • How can I open binary image files? (.img)

    - by Simon Cahill
    I'm a Windows/Mac/Ubuntu and Android user, so I know what I'm talking about when I say: how do I open binary image files (.img)? They just won't open, on any OS... I'm an Android dev, currently working on a ROM (I also program using Windows), and I need to extract files from .img files. I've converted them to .ext4.img, but they just aren't recognized by Linux (definitely not by Android), by Mac OS or by Windows. In other words, I can't open, extract or mount them. Can anyone help me? I'm kinda confused...

    Read the article

  • I think "/lib/modules/$(uname -r)/build" points to incorrect folder

    - by Simón
    I compile/create my own deb packages of the kernel with:

        make-kpkg --rootcmd fakeroot --initrd --append-to-version=$version --revision=1 kernel_image kernel_headers

    But when I install both packages, in /lib/modules/(*name_kernel_compiled*) it creates two links, sources and build, both pointing to the folder with the sources I compiled from. The sources link is correct, but build should point to /usr/src/linux-(kernel version), don't you think?

    Read the article

  • Can I show a table of one custom variable against another?

    - by Simon
    We have a number of custom variables set up in google analytics. We'd like to show a table of event occurrences broken down by two custom variables, e.g. if variable one can be A, B, or C and variable two can be J, K or L:

        Events | A   | B   | C   |
        -------+-----+-----+-----+
        J      | 345 |  65 |  12 |
        K      | 234 |  43 |   7 |
        L      | 123 |  21 |   4 |
        -------+-----+-----+-----+

    How do I get the information in that format?

    Read the article

  • Use of list-unsubscribe to improve inbox delivery

    - by Jeffrey Simon
    To overcome email being classified as spam by Gmail, Google recommends a number of steps, which we have implemented (namely SPF, DKIM, and Precedence: bulk). One additional measure they recommend at https://support.google.com/mail/bin/answer.py?hl=en&answer=81126#authentication reads as follows: "Because Gmail can help users automatically unsubscribe from your email, we strongly recommend the following: Provide a 'List-Unsubscribe' header which points to an email address where the user can unsubscribe easily from future mailings (Note: This is not a substitute method for unsubscribing)." Documentation for List-Unsubscribe is found at http://www.list-unsubscribe.com/. From this documentation I expect a button to be provided by a supported mail client. I have tested the 'List-Unsubscribe' header and it does not appear to provide the button. I have tested in both Gmail and OS X Mail, with an http address alone and with both an email address and an http address. The format of the header is as follows:

        List-Unsubscribe: <mailto:[email protected]>, <http://domain.com/member/unsubscribe/[email protected]?id=12345N>

    No button appears in any test. My questions:

    How widely is List-Unsubscribe supported? Should a button be appearing somewhere, or does something else have to be present? I have seen a comment that even if the button is not present, services like Gmail, Yahoo and Hotmail/Windows Live would give higher regard to email carrying the header, so it might be worthwhile for this aspect alone. Please note that our standard email footer already contains instructions and a link to allow unsubscribing from our email. Finally, is it worthwhile to implement this header? (That is, are there any downsides?)

    Read the article

  • Why do programming languages allow shadowing/hiding of variables and functions?

    - by Simon
    Many of the most popular programming languages (such as C++, Java, Python, etc.) have the concept of hiding/shadowing of variables or functions. Whenever I've encountered hiding or shadowing, it has been the cause of hard-to-find bugs, and I've never seen a case where I found it necessary to use these features of the languages. To me it would seem better to disallow hiding and shadowing. Does anybody know of a good use of these concepts?
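    For readers who haven't hit one of these bugs, here is a minimal illustration of the kind of problem the question describes. C# is used here purely for illustration (the question names C++, Java and Python, which behave similarly); the Counter class is a hypothetical example, not taken from the question.

        using System;

        class Counter
        {
            private int total;

            public void Add(int amount)
            {
                // Bug: this local declaration shadows the field 'total'.
                // The code compiles silently, but the field is never updated.
                int total = 0;
                total += amount;
            }

            public int Total => total;
        }

        class Program
        {
            static void Main()
            {
                var counter = new Counter();
                counter.Add(5);
                Console.WriteLine(counter.Total); // prints 0, not 5 - a classic hard-to-find bug
            }
        }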

    Read the article

  • What method do I use to manage an app-specific background process?

    - by Simon Dubois
    I am developing an application with different behavior depending on its arguments: "-config" starts a Gtk window to change options and to start and close the daemon; "-daemon" starts a background process that does something every X minutes. I already know how to use fork/system/exec etc., but I would like to know the main logic of such an application: restart or refresh the daemon when the configuration changes, and keep only one instance of the daemon. I have read that killing the daemon to restart it is not a clean way to do it. How do other applications do this? (Ubuntu One, weather forecast, RSS feeds working with the notification area.) Thanks for your help. PS: I don't want to create a system-wide daemon, just a user application with a background process.

    Read the article

  • .NET vs Windows 8: Rematch!

    - by Simon Cooper
    So, although you will be able to use your existing .NET skills to develop Metro apps, it turns out Microsoft are limiting Visual Studio 2011 Express to Metro-only. From the Express website: "Visual Studio 11 Express for Windows 8 provides tools for Metro style app development. To create desktop apps, you need to use Visual Studio 11 Professional, or higher." Oh dear. To develop any sort of non-Metro application, you will need to pay for at least VS Professional. I suspect Microsoft (or at least, certain groups within Microsoft) have a very explicit strategy in mind. By making VS Express Metro-only, developers who don't want to pay for Professional will be forced to make their simple one-shot or open-source application in Metro. This increases the number of applications available for Windows 8 and Windows mobile devices, which in turn makes those platforms more attractive for consumers. When you use the free VS 11 Express, instead of paying Microsoft, you provide them a service by making applications for Metro, which in turn makes Microsoft's mobile offering more attractive to consumers, increasing their market share. Of course, it remains to be seen if developers forced to jump onto the Metro bandwagon will simply jump ship to Android or iOS instead. At least, that's what I think is going on. With Microsoft, who really knows?

    Read the article

  • "Best fit" to avoid reuse of object instances in a collection

    - by Simon
    Imagine I have a collection of object instances which represent activities for a user to undertake. Depending on user attributes, I have to randomly select instances to present activities to the user. For some users, I need to present more activities to them than there are available activities, in which case I want to use the following algorithm: if all available activities have already been presented to the user, then re-select a "used" activity, selecting the earliest-presented activity ordered by frequency of use. In other words, try to reduce repetition and, where repetition is unavoidable, use the instances which have been repeated less often and were presented furthest back in time. Before I go on to code that algorithm, I wondered if there is some existing pattern I can re-use? [EDIT] "Furthest back in time" is not relevant, as I will pass the algorithm an ordered collection of used instances where the first entry is the first presented.
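    As a rough C# sketch of the selection rule described above (the Activity type and the shape of the usage history are assumptions for illustration, not part of the question): prefer an unused instance; otherwise pick the least-repeated one, breaking ties in favour of the instance whose most recent presentation is furthest back in the ordered history.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Activity
        {
            public string Name { get; set; }
        }

        static class ActivityPicker
        {
            // 'used' is ordered by presentation time: the first entry was presented first,
            // and an activity appears once per presentation.
            public static Activity PickNext(IList<Activity> available, IList<Activity> used)
            {
                // Prefer an activity that has never been presented
                // (a random choice among the unused ones is elided for brevity).
                var unused = available.Except(used).FirstOrDefault();
                if (unused != null)
                    return unused;

                // Repetition is unavoidable: least-repeated first, then the one
                // whose most recent presentation is furthest back in time.
                return used
                    .Select((activity, index) => (activity, index))
                    .GroupBy(x => x.activity)
                    .OrderBy(g => g.Count())
                    .ThenBy(g => g.Max(x => x.index))
                    .First()
                    .Key;
            }
        }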

    Read the article

  • What is the purpose of arrows?

    - by Simon
    I am learning functional programming with Haskell, and I try to grasp concepts by first understanding why I need them. I would like to know the goal of arrows in functional programming languages. What problem do they solve? I checked http://en.wikibooks.org/wiki/Haskell/Understanding_arrows and http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf. All I understand is that they are used to describe graphs for computations, and that they allow easier point-free style coding. The article assumes that point-free style is generally easier to understand and to write. This seems quite subjective to me. In another article (http://en.wikibooks.org/wiki/Haskell/StephensArrowTutorial#Hangman:_Main_program), a hangman game is implemented, but I cannot see how arrows make this implementation natural. I could find a lot of papers describing the concept, but nothing about the motivation. What am I missing?

    Read the article

  • File Permissions

    - by Simon Bateman
    Some years/versions ago I found a script that made new files and sub-folders take on the group/permissions of the current directory. Currently, when I import photographs into 12.04 using Shotwell, I have to use the terminal to issue: chgrp photography *.JPG. This enables other Ubuntu members of the 'photography' group to add these files to their version of Shotwell. I find that modifying the folder's Properties/Permissions will NOT set the group when using 'Apply Permissions to Enclosed Files'.

    Read the article

  • Pantheon Not Completely Installed?

    - by Simon
    I have just installed the Pantheon shell today, and I have not found any help with this yet. I am not just a random noob; I use a bunch of other shells and I also have a few development applications on my copy of Ubuntu. But ever since I've opened up Pantheon, I cannot find settings or this thing called the launchpad. (If the launchpad is the app drawer in the upper left corner, or if it's the dock, I have it; if that's not what the launchpad is, then I can't access it.) I can only change my wallpaper by going back to Unity, GNOME, or KDE. There is a System Settings in the power menu (upper right), but it only has Language Support, Ubuntu One, Additional Drivers, and Printing. I can still access the full Ubuntu settings in GNOME or Unity, but that's it. I installed it in the terminal, uninstalled it, and reinstalled it using the Software Center. Please help me if any of you can! Thanks!

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore.

    I can:
    - See networks
    - Connect to a network
    - Ping myself (localhost, 192.168.0.103)

    I can't:
    - Ping others (including other devices on the same wireless network, including the gateway/router)
    - Resolve hosts
    - Access any other external resource, whether on my own network or on the internet

    Using Wireshark, I noticed my computer is continuously sending ARP requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP address for which it knows the MAC address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000 ms. The output of route -n is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth1
        192.168.0.0     0.0.0.0         255.255.255.0   U     9      0        0 eth1
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    Before upgrading, wifi worked fine. Using other devices, wifi still works fine. Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions?

    Update: I'm using the wl driver. Here's the relevant output of some commands:

        lspci | grep Wireless
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)

        cat /etc/modprobe.d/blacklist.conf
        [...]
        blacklist mac80211
        blacklist brcm80211
        blacklist cfg80211
        blacklist lib80211_crypt_tkip
        blacklist lib80211
        blacklist b43

        cat /etc/rc.local
        sudo modprobe -r lib80211
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko
        sudo modprobe wl
        exit 0

    The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade).

    Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is:

        03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)

    Read the article

  • Color distortion with Intel HD4000 over HDMI output - VGA is fine

    - by Simon Möller
    I set up a new system with an i5 3570K and a Gigabyte Z77-D3H using integrated graphics (HD4000), and I've installed the 64-bit desktop version of 12.04. Connected to my Acer screen via HDMI, the colors are horribly distorted; I think the biggest issue is that black appears greenish. Over VGA everything is fine. Interestingly, I once observed that connecting both the VGA and the HDMI cable at the same time solved the issue: Ubuntu thought two screens were connected, I mirrored the image, and over HDMI the colors looked fine. However, after a reboot I am now unable to reproduce this behavior. I read frequently that Intel hardware should be supported out of the box by Ubuntu, but that doesn't seem to be the case here. Should I upgrade my kernel? If so, which version would you recommend? Thank you for your answers!

    Read the article

  • Loss of sound in Ubuntu 12.04

    - by Leo Simon
    I'm running Linux E6520 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux on a Dell Latitude E6530. (This is a new machine; I have run the same version of Linux on an older machine for a year without this happening.)

    I've been losing sound regularly, though I have not been able to isolate the trigger for this. I've scoured the web on this subject, in particular https://help.ubuntu.com/community/SoundTroubleshootingProcedure and "Audio stopped working suddenly in 12.04". Nothing from the first site seemed to work for me. From the second site, I learned enough to be able to fix the problem when it happens, but nothing on the web has helped me figure out why the problem is happening in the first place.

    Patching together stuff from the web, and with some blind luck, I've found that the following steps seem to restore sound:

        pulseaudio --kill
        pulseaudio --start
        pavucontrol -> output devices
        Click on the "Mute audio" icon, which mutes audio
        Click on the "Mute audio" icon again, which unmutes audio

    This obviously doesn't make sense: audio wasn't muted in the first place, but somehow, magically, toggling mute audio off and on seems to reset something.

    Can anybody suggest, from this information:
    - why sound would be disappearing in the first place (it seems as though something is getting muted at the system level, but I don't know what)?
    - a simpler (command-line/script) way of restoring sound - in particular, is it possible to reset pavucontrol from the command line?

    Some other pieces of information that may be of use:
    - The problem is clearly happening at the system level, since I've set up a clean new user, and this user has the same problems that I do. So user fixes like deleting the .pulse directory won't (and don't) help.
    - Sound works fine in Windows (dual-boot), so it's not a hardware problem.

    Any help/suggestions on this would be most appreciated.

    Read the article

  • Design-time failure: WPF, Usercontrols and Namespaces

    - by Simon Woods
    Hi, I have a very simple WPF project comprising a Window and a UserControl. I'm very much in a learning phase. It works fine when I run it; however, I am unable to see the form at design time. The problem, I believe, is something to do with namespaces, but I don't understand where. It may well be a simple error.

    Main Window XAML:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                xmlns:views="clr-namespace:UserLogin"
                x:Class="UserLogin.MainView"
                x:Name="MainViewWindow"
                mc:Ignorable="d"
                Title="Login" Height="141" Width="347">
            <Grid>
                <views:LoginView />
            </Grid>
        </Window>

    Main Window CodeBehind:

        Imports Microsoft.VisualBasic
        Imports System
        Imports System.Windows
        Imports UserLogin

        Namespace UserLogin
            Partial Public Class MainView
                Inherits System.Windows.Window

                Public Sub New()
                    InitializeComponent()
                End Sub
            End Class
        End Namespace

    UserControl XAML:

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     x:Class="UserLogin.LoginView"
                     x:Name="LoginViewControl"
                     mc:Ignorable="d"
                     d:DesignHeight="96" d:DesignWidth="298">
            <Grid Height="96" Width="298">
                <Button Command="{Binding OKCommand}" Height="21" Margin="0,0,90,16" Name="btnOK" VerticalAlignment="Bottom" HorizontalAlignment="Right" Width="76">OK</Button>
                <Button Command="{Binding CancelCommand}" Height="21" HorizontalAlignment="Right" Margin="0,0,9,16" Name="btnCancel" VerticalAlignment="Bottom" Width="75">Cancel</Button>
                <Label Height="23" HorizontalAlignment="Left" Margin="10,5,0,0" Name="Label1" VerticalAlignment="Top" Width="85">Name:</Label>
                <Label HorizontalAlignment="Left" Margin="10,32,0,0" Name="Label2" Width="85" Height="29" VerticalAlignment="Top">Password:</Label>
                <TextBox Margin="0,31,6,0" Name="txtPassword" Height="22" VerticalAlignment="Top" HorizontalAlignment="Right" Width="182" />
                <ComboBox Height="22" Margin="110,6,6,0" Name="cboNames" VerticalAlignment="Top" />
            </Grid>
        </UserControl>

    UserControl CodeBehind:

        Imports Microsoft.VisualBasic
        Imports System
        Imports UserLogin

        Namespace UserLogin
            Partial Public Class LoginView
                Inherits System.Windows.Controls.UserControl

                Public Sub New()
                    InitializeComponent()
                End Sub
            End Class
        End Namespace

    I think I'm missing something with this namespace, xmlns:views="clr-namespace:UserLogin", since IntelliSense doesn't give me the usercontrol declared within it in the XAML designer but rather reports the error "Unable to load the metadata for the assembly ... etc etc".

    Thx for any suggestions

    Simon

    Read the article

  • How to build Lucene / Solr from source code in a Windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching on a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache and discussions on the web of people successfully doing this for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including jar (and subsequent war) files.

    My system is:
    - Windows 7 Ultimate for development
    - Visual Studio 2010 for C# / JavaScript development
    - MyEclipse 8.6 / Eclipse 3.5 for the Java build from source
    - Subclipse 1.6x SVN plugin to get the source from Apache's SVN
    - Apache Solr 1.4.1

    So far I have:

    Found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236. Specifically I need to patch field_collapsing_1.1.0.patch (https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch) and SOLR-236-1_4_1.patch (https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch).

    Downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/).

    I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar or other files into the bin folder (it stays empty). The build process starts, but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files which it depends on.

    I have two interrelated problems:
    1) I can't get the downloaded Lucene trunk to build
    2) The jar, war and associated files are not created

    Can anyone help with what I am missing to build the war file? I have spent two days to get this far, as the help online is extremely patchy and I can't find a walk-through tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon

    Read the article

  • Constructor Injection and when to use a Service Locator

    - by Simon
    I'm struggling to understand parts of StructureMap's usage. In particular, in the documentation a statement is made regarding a common anti-pattern: the use of StructureMap as a Service Locator only, instead of constructor injection (code samples straight from the StructureMap documentation):

        public ShippingScreenPresenter()
        {
            _service = ObjectFactory.GetInstance<IShippingService>();
            _repository = ObjectFactory.GetInstance<IRepository>();
        }

    instead of:

        public ShippingScreenPresenter(IShippingService service, IRepository repository)
        {
            _service = service;
            _repository = repository;
        }

    This is fine for a very short object graph, but when dealing with objects many levels deep, does this imply that you should pass down all the dependencies required by the deeper objects right from the top? Surely this breaks encapsulation and exposes too much information about the implementation of deeper objects.

    Let's say I'm using the Active Record pattern, so my record needs access to a data repository to be able to save and load itself. If this record is loaded inside an object, does that object call ObjectFactory.CreateInstance() and pass it into the active record's constructor? What if that object is inside another object? Does it take the IRepository in as its own parameter from further up? That would expose to the parent object the fact that we're accessing the data repository at this point, something the outer object probably shouldn't know.

        public class OuterClass
        {
            public OuterClass(IRepository repository)
            {
                // Why should I know that ThingThatNeedsRecord needs a repository?
                // That smells like exposed implementation to me, especially since
                // ThingThatNeedsRecord doesn't use the repo itself, but passes it
                // to the record.
                // Also, where do I create repository? Have to instantiate it somewhere
                // up the chain of objects.
                ThingThatNeedsRecord thing = new ThingThatNeedsRecord(repository);
                thing.GetAnswer("question");
            }
        }

        public class ThingThatNeedsRecord
        {
            public ThingThatNeedsRecord(IRepository repository)
            {
                this.repository = repository;
            }

            public string GetAnswer(string someParam)
            {
                // create activeRecord(s) and process, returning some result
                // part of which contains:
                ActiveRecord record = new ActiveRecord(repository, key);
            }

            private IRepository repository;
        }

        public class ActiveRecord
        {
            public ActiveRecord(IRepository repository)
            {
                this.repository = repository;
            }

            public ActiveRecord(IRepository repository, int primaryKey)
            {
                this.repository = repository;
                Load(primaryKey);
            }

            public void Save();

            private void Load(int primaryKey)
            {
                this.primaryKey = primaryKey;
                // access the database via the repository and set someData
            }

            private IRepository repository;
            private int primaryKey;
            private string someData;
        }

    Any thoughts would be appreciated. Simon
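    Not an answer, just a sketch of the pattern the documentation is pushing towards, assuming a StructureMap 2.x-style API (ObjectFactory.Initialize, For<T>().Use<T>(), GetInstance<T>(); exact registration syntax varies by version). The ShippingService and SqlRepository classes are hypothetical stand-ins: registration happens once in a composition root, and resolving the top-level type lets the container build the whole constructor chain, so nothing below the root calls ObjectFactory directly.

        using StructureMap;

        public interface IShippingService { }
        public interface IRepository { }

        // Hypothetical implementations, purely for the sketch.
        public class ShippingService : IShippingService { }
        public class SqlRepository : IRepository { }

        public class ShippingScreenPresenter
        {
            private readonly IShippingService _service;
            private readonly IRepository _repository;

            // Constructor injection: the container supplies both dependencies.
            public ShippingScreenPresenter(IShippingService service, IRepository repository)
            {
                _service = service;
                _repository = repository;
            }
        }

        public static class CompositionRoot
        {
            public static ShippingScreenPresenter Resolve()
            {
                // Registration happens once, at the top of the application.
                ObjectFactory.Initialize(x =>
                {
                    x.For<IShippingService>().Use<ShippingService>();
                    x.For<IRepository>().Use<SqlRepository>();
                });

                // Only the composition root talks to the container.
                return ObjectFactory.GetInstance<ShippingScreenPresenter>();
            }
        }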

    Read the article

  • How to decrypt a string in C# that was encrypted in Delphi

    - by Simon Linder
    Hi all, we have a project written in Delphi that we want to convert to C#. The problem is that we have some passwords and settings that are encrypted and written into the registry. When we need a specific password we get it from the registry and decrypt it so we can use it. For the conversion to C# we have to do it the same way, so that the application can also be used by users who have the old version and want to upgrade it. Here is the code we use to encrypt/decrypt strings in Delphi:

        unit uCrypt;

        interface

        function EncryptString(strPlaintext, strPassword : String) : String;
        function DecryptString(strEncryptedText, strPassword : String) : String;

        implementation

        uses
          DCPcrypt2, DCPblockciphers, DCPdes, DCPmd5;

        const
          CRYPT_KEY = '1q2w3e4r5t6z7u8';

        function EncryptString(strPlaintext, strPassword : String) : String;
        var
          cipher : TDCP_3des;
          strEncryptedText : String;
        begin
          if strPlaintext <> '' then
          begin
            try
              cipher := TDCP_3des.Create(nil);
              try
                cipher.InitStr(CRYPT_KEY, TDCP_md5);
                strEncryptedText := cipher.EncryptString(strPlaintext);
              finally
                cipher.Free;
              end;
            except
              strEncryptedText := '';
            end;
          end;
          Result := strEncryptedText;
        end;

        function DecryptString(strEncryptedText, strPassword : String) : String;
        var
          cipher : TDCP_3des;
          strDecryptedText : String;
        begin
          if strEncryptedText <> '' then
          begin
            try
              cipher := TDCP_3des.Create(nil);
              try
                cipher.InitStr(CRYPT_KEY, TDCP_md5);
                strDecryptedText := cipher.DecryptString(strEncryptedText);
              finally
                cipher.Free;
              end;
            except
              strDecryptedText := '';
            end;
          end;
          Result := strDecryptedText;
        end;

        end.

    So, for example, when we want to encrypt the string asdf1234 we get the result WcOb/iKo4g8=. We now want to decrypt that string in C#. Here is what we tried to do:

        public static void Main(string[] args)
        {
            string Encrypted = "WcOb/iKo4g8=";
            string Password = "1q2w3e4r5t6z7u8";
            string DecryptedString = DecryptString(Encrypted, Password);
        }

        public static string DecryptString(string Message, string Passphrase)
        {
            byte[] Results;
            System.Text.UTF8Encoding UTF8 = new System.Text.UTF8Encoding();

            // Step 1. We hash the passphrase using MD5
            // We use the MD5 hash generator as the result is a 128 bit byte array
            // which is a valid length for the TripleDES encoder we use below
            MD5CryptoServiceProvider HashProvider = new MD5CryptoServiceProvider();
            byte[] TDESKey = HashProvider.ComputeHash(UTF8.GetBytes(Passphrase));

            // Step 2. Create a new TripleDESCryptoServiceProvider object
            TripleDESCryptoServiceProvider TDESAlgorithm = new TripleDESCryptoServiceProvider();

            // Step 3. Setup the decoder
            TDESAlgorithm.Key = TDESKey;
            TDESAlgorithm.Mode = CipherMode.ECB;
            TDESAlgorithm.Padding = PaddingMode.None;

            // Step 4. Convert the input string to a byte[]
            byte[] DataToDecrypt = Convert.FromBase64String(Message);

            // Step 5. Attempt to decrypt the string
            try
            {
                ICryptoTransform Decryptor = TDESAlgorithm.CreateDecryptor();
                Results = Decryptor.TransformFinalBlock(DataToDecrypt, 0, DataToDecrypt.Length);
            }
            finally
            {
                // Clear the TripleDes and Hashprovider services of any sensitive information
                TDESAlgorithm.Clear();
                HashProvider.Clear();
            }

            // Step 6. Return the decrypted string in UTF8 format
            return UTF8.GetString(Results);
        }

    Well, the result differs from the expected result. After we call DecryptString() we expect to get asdf1234, but we get something else. Does anyone have an idea of how to decrypt that correctly? Thanks in advance, Simon

    Read the article

  • Super Computer Built from Raspberry Pi Boards and LEGO Bricks

    - by Jason Fitzpatrick
    It was only a matter of time before someone chained together dozens of Raspberry Pi boards into a serviceable supercomputer; read on to see how a team of Southampton scientists built a 64-core machine using them. Image courtesy of Simon Cox and the University of Southampton.

    From the University of Southampton press release:

    Professor Cox comments: "As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer."

    The racking was built using Lego with a design developed by Simon and James, who has also been testing the Raspberry Pi by programming it using free computer programming software Python and Scratch over the summer. The machine, named "Iridis-Pi" after the University's Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes using Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1 TB of memory (16 GB SD cards for each Raspberry Pi). Professor Cox uses the free plug-in 'Python Tools for Visual Studio' to develop code for the Raspberry Pi.

    Read the article
