Search Results

Search found 55281 results on 2212 pages for 'get set'.


  • What’s ‘default’ for?

    - by Strenium
    Sometimes there's a need to communicate explicitly that a variable has never been changed from its default value, in other words that it is still "uninitialized". Perhaps "initialized" is not quite the right word, since a value type always has some sort of value (even a nullable one), but that's the point - how do we tell? Of course an 'int' would be 0, an 'enum' would be whichever member maps to zero, and so on - we could make this kind of check "by hand", but eventually it would get a bit messy. There's a more elegant way, using the little-known 'default' keyword. Let's just say we have a simple enum (note that 'White' is given the value 0, so it is the enum's default):

        namespace xxx.Common.Domain
        {
            public enum SimpleEnum
            {
                White = 0,
                Black = 1,
                Red = 2
            }
        }

    In the case below we set the value of the enum to 'White', which happens to be its default value. So the snippet below will set the 'isDefault' Boolean to 'true'.

        // 'True' case
        SimpleEnum simpleEnum = SimpleEnum.White;
        bool isDefault;  /* btw, this one is 'false' by default */
        isDefault = simpleEnum == default(SimpleEnum) ? true : false;  /* default value 'White' */

    Here we set the value to 'Red', and 'default' will tell us whether or not this is the default value for this enum type. In this case: 'false'.

        // 'False' case
        simpleEnum = SimpleEnum.Red;  /* change from the default */
        isDefault = simpleEnum == default(SimpleEnum) ? true : false;  /* value is no longer the default */

    The same 'default' functionality can also be applied to DateTimes, other value types and custom types as well. Sweet 'n Short. Happy Coding!
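    A small illustrative sketch of that last point (the variable names here are made up, not from the original post): 'default' yields DateTime.MinValue for DateTime, and null for nullable value types and reference types, so the same comparison trick works everywhere.

        DateTime when = default(DateTime);          // 01/01/0001 00:00:00, i.e. DateTime.MinValue
        int?     maybe = default(int?);             // null - nullable value types default to null
        string   text  = default(string);           // null - reference types default to null
        bool isUnset = when == default(DateTime);   // true, same comparison as in the enum example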

  • How Do You Declare a Dependency Property in VB.Net 3.0

    - by discwiz
    My company is stuck on .NET 3.0. The task I am trying to tackle is simple: I need to bind the IsChecked property of the CheckBoxResolvesCEDAR to CompletesCEDARWork in my Audio class. The more I read about this, the more it appears that I have to declare CompletesCEDARWork as a dependency property, but I can not find a good example of how this is done. I found the example below, but when I pasted it into my code I got an "is not defined" error for GetValue, and I have not successfully figured out what MyCode is supposed to represent. Any help/examples would be greatly appreciated. Thanks.

        Public Shared ReadOnly IsSpinningProperty As DependencyProperty = _
            DependencyProperty.Register("IsSpinning", GetType(Boolean), GetType(MyCode))

        Public Property IsSpinning() As Boolean
            Get
                Return CBool(GetValue(IsSpinningProperty))
            End Get
            Set(ByVal value As Boolean)
                SetValue(IsSpinningProperty, value)
            End Set
        End Property

    Here is my slimmed-down Audio class as it stands now:

        Imports System.Xml
        Imports System
        Imports System.IO
        Imports System.Collections.ObjectModel
        Imports System.ComponentModel

        Public Class Audio
            Private mXMLString As String
            Private mTarpID As Integer
            Private mStartTime As Date
            Private mEndTime As Date
            Private mAudioArray As Byte()
            Private mFileXMLInfo As IO.FileInfo
            Private mFileXMLStream As IO.FileStream
            Private mFileAudioInfo As IO.FileInfo
            Private mDisplayText As String
            Private mCompletesCEDARWork As Boolean

            Private Property CompletesCEDARWork() As Boolean
                Get
                    Return mCompletesCEDARWork
                End Get
                Set(ByVal value As Boolean)
                    mCompletesCEDARWork = value
                End Set
            End Property

    And here is my XAML DataTemplate where I set the binding:

        <DataTemplate x:Key="UploadLayout" DataType="Audio">
            <Border BorderBrush="LightGray" CornerRadius="8" BorderThickness="1" Padding="10" Margin="0,3,0,0">
                <StackPanel Orientation="Vertical">
                    <TextBlock Text="{Binding Path=DisplayText}"></TextBlock>
                    <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
                        <TextBlock Text="TARP ID" VerticalAlignment="Center"/>
                        <ComboBox x:Name="ListBoxTarpIDs" ItemsSource="{Binding Path=TarpIds}" SelectedValue="{Binding Path=TarpID}" BorderBrush="Transparent" Background="Transparent"></ComboBox>
                    </StackPanel>
                    <CheckBox x:Name="CheckBoxResolvesCEDAR" Content="Resolves CEDAR Work" IsChecked="{Binding ElementName=Audio,Path=CompletesCEDARWork,Mode=TwoWay}"/>
                </StackPanel>
            </Border>
        </DataTemplate>
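    For context, a hedged C# sketch of the same registration pattern (this is not the poster's code, and the class name is reused purely for illustration). The third argument to Register is the owner type - the class the property is declared on, which is what "MyCode" stands for in the quoted snippet. Also, GetValue and SetValue are inherited from DependencyObject, so the class declaring the property normally has to derive from DependencyObject (or something that does, such as FrameworkElement); on a plain class they would indeed be "not defined".

        // Hedged C# sketch; names are illustrative only
        using System.Windows;

        public class Audio : DependencyObject
        {
            public static readonly DependencyProperty CompletesCEDARWorkProperty =
                DependencyProperty.Register("CompletesCEDARWork", typeof(bool), typeof(Audio));

            public bool CompletesCEDARWork
            {
                get { return (bool)GetValue(CompletesCEDARWorkProperty); }
                set { SetValue(CompletesCEDARWorkProperty, value); }
            }
        }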

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up they realized that they needed to create a web application per managed server. They did not want this management burden, but would like to have one web application deployed to multiple nodes. The reason a unique application is needed per node is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration, either to a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized.

    To enable this externalization you would need to change host.properties so it is generic for all the managed servers and thus can be reused by all of them:

        host.name=${cluster.node.id}

    The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>.

    Voila, the postfix is externalized and the web application can be shared amongst the cluster nodes.

  • SharePoint 2007 Enabling Incoming Email Error

    - by Cherie Riesberg
    Symptom: When configuring incoming e-mail, the e-mails come through just fine if the server name is in the e-mail address ([email protected]), but when you change it to a vanity name ([email protected]), the message is bounced back and you get this error:

        Delivery has failed to these recipients or distribution lists:
        [email protected]
        Your message wasn't delivered because of security policies. Microsoft Exchange will not try to redeliver this message for you. Please provide the following diagnostic text to your system administrator.
        The following organization rejected your message: servername01.fqdn.com.

    Problem: The SharePoint server relay rejects the message because it doesn't recognize the name. You have set it up in Exchange, but you also need to set up an alias in the SMTP service on the SharePoint server.

    Solution: Configure an alias domain. An alias domain is an alias of the default domain. You can set up alias domains that use the same settings as the default domain. Messages that are received by the SMTP service for an alias domain are placed in the Drop folder that is designated for the default domain. To configure an alias domain, follow these steps:

    1. Start IIS Manager or open the IIS snap-in.
    2. Expand Server_name, where Server_name is the name of the server, and then expand the SMTP virtual server that you want (for example, Default SMTP Virtual Server).
    3. Right-click Domains, point to New, and then click Domain. The New SMTP Domain Wizard starts.
    4. Click Alias, and then click Next.
    5. Type a name for the alias domain in the Name box, and then click Finish.
    6. Quit IIS Manager or close the IIS snap-in.

  • Stop Time Sync Between Host and Guest - VirtualBox

    - by KoopaTroopa
    So I've been googling and I've tried the following commands. I want to stop the virtual machine from having the correct time and just have it stick with whatever time it last recorded. The host operating system is Ubuntu 12.04 and the guest is Windows XP. I've turned time syncing off in XP so it won't do that when I connect to the Internet. However, it does appear to take the time from Ubuntu and set it as its own.

        VBoxManage setextradata XP11 “VBoxInternal/TM/WarpDrivePercentage” 200
        vboxmanage setextradata XP11 "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"

    Neither command has worked, as the time is always set to exactly the same as Ubuntu's. I've located the extradata entry in the VirtualBox XML file. It shows both changes that the commands above are meant to make, but of course it still hasn't stopped updating the time.

        <ExtraData>
            <ExtraDataItem name="GUI/LastGuestSizeHint" value="1152,864"/>
            <ExtraDataItem name="GUI/LastNormalWindowPosition" value="74,52,1152,911"/>
            <ExtraDataItem name="VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" value="1"/>
            <ExtraDataItem name="&#x201C;VBoxInternal/TM/WarpDrivePercentage&#x201D;" value="200"/>
        </ExtraData>

  • Count seconds and minutes with MCU timer/interrupt?

    - by arynhard
    I am trying to figure out how to create a timer for my C8051F020 MCU. The following code uses the value passed to init_Timer2(), derived from the formula 65535 - (0.1 / (12 / 2000000)) = 48868. I set up the timer to increment a count every time the interrupt fires and, for every 10 counts, count one second. This is based on the above formula: 48868, when passed to init_Timer2(), should produce a 0.1 second delay, so it should take ten of them per second. However, when I test the timer it is a little fast. At ten seconds the timer reports 11 seconds; at 20 seconds the timer reports 22 seconds. I would like to get as close to a perfect second as I can. Here is my code:

        #include <compiler_defs.h>
        #include <C8051F020_defs.h>

        void init_Clock(void);
        void init_Watchdog(void);
        void init_Ports(void);
        void init_Timer2(unsigned int counts);
        void start_Timer2(void);
        void timer2_ISR(void);

        unsigned int timer2_Count;
        unsigned int seconds;
        unsigned int minutes;

        int main(void)
        {
            init_Clock();
            init_Watchdog();
            init_Ports();
            start_Timer2();
            P5 &= 0xFF;
            while (1);
        }

        //=============================================================
        // Functions
        //=============================================================
        void init_Clock(void)
        {
            OSCICN = 0x04;    // 2 MHz
            //OSCICN = 0x07;  // 16 MHz
        }

        void init_Watchdog(void)
        {
            // Disable watchdog timer
            WDTCN = 0xDE;
            WDTCN = 0xAD;
        }

        void init_Ports(void)
        {
            XBR0 = 0x00;
            XBR1 = 0x00;
            XBR2 = 0x40;
            P0 = 0x00;
            P0MDOUT = 0x00;
            P5 = 0x00;        // Set P5 to 1111
            P74OUT = 0x08;    // Set P5 4 - 7 (LEDs) to push-pull (output)
        }

        void init_Timer2(unsigned int counts)
        {
            CKCON = 0x00;     // Set all timers to system clock divided by 12
            T2CON = 0x00;     // Set timer 2 to timer mode
            RCAP2 = counts;
            T2 = 0xFFFF;      // 65535
            IE |= 0x20;       // Enable timer 2
            T2CON |= 0x04;    // Start timer 2
        }

        void start_Timer2(void)
        {
            EA = 0;
            init_Timer2(48868);
            EA = 1;
        }

        void timer2_ISR(void) interrupt 5
        {
            T2CON &= ~(0x80);
            P5 ^= 0xF0;
            timer2_Count++;
            if (timer2_Count % 10 == 0)
            {
                seconds++;
            }
            if (seconds % 60 == 0 && seconds != 0)
            {
                minutes++;
            }
        }

  • Why doesn't TextBlock databinding call ToString() on a property whose compile-time type is an interface?

    - by Jay
    This started with weird behaviour that I thought was tied to my implementation of ToString(), and I asked this question: http://stackoverflow.com/questions/2916068/why-wont-wpf-databindings-show-text-when-tostring-has-a-collaborating-object It turns out to have nothing to do with collaborators and is reproducible. When I bind Label.Content to a property of the DataContext that is declared as an interface type, ToString() is called on the runtime object and the label displays the result. When I bind TextBlock.Text to the same property, ToString() is never called and nothing is displayed. But if I change the declared property to a concrete implementation of the interface, it works as expected. Is this somehow by design? If so, any idea why?

    To reproduce: create a new WPF Application (.NET 3.5 SP1) and add the following classes:

        public interface IFoo
        {
            string foo_part1 { get; set; }
            string foo_part2 { get; set; }
        }

        public class Foo : IFoo
        {
            public string foo_part1 { get; set; }
            public string foo_part2 { get; set; }

            public override string ToString()
            {
                return foo_part1 + " - " + foo_part2;
            }
        }

        public class Bar
        {
            public IFoo foo
            {
                get { return new Foo { foo_part1 = "first", foo_part2 = "second" }; }
            }
        }

    Set the XAML of Window1 to:

        <Window x:Class="WpfApplication1.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <StackPanel>
                <Label Content="{Binding foo, Mode=Default}"/>
                <TextBlock Text="{Binding foo, Mode=Default}"/>
            </StackPanel>
        </Window>

    And in Window1.xaml.cs:

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();
                DataContext = new Bar();
            }
        }

    When you run this application, you'll see the text only once (at the top, in the label). If you change the type of the foo property on the Bar class to Foo (instead of IFoo) and run the application again, you'll see the text in both controls.

  • Grub menu not waiting despite GRUB_TIMEOUT=10

    - by Optimus
    I have Ubuntu 12.04 installed alongside Windows 7. The grub menu doesn't seem to obey GRUB_TIMEOUT=10: I see the grub menu there for a split second and it immediately defaults to the first option. The grub menu worked fine when I first installed Ubuntu. I am not able to pinpoint what exactly broke it (maybe some update?). I did resize my Ubuntu partition using GParted, but am not sure if that is what caused it. Here are my settings from /etc/default/grub:

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""

    How do I fix this?

    Edit: As suggested by 'kamil', this is what I have tried so far with no luck:

    1) Hold the Shift key while booting.

    2) sudo gedit /etc/default/grub, edit GRUB_TIMEOUT to GRUB_TIMEOUT=10, then sudo update-grub

    3) sudo gedit /etc/default/grub, edit GRUB_TIMEOUT to GRUB_TIMEOUT=10, then sudo update-grub2

    4) At the end of the /etc/grub.d/00_header file, comment out the if condition except for the regular set timeout line, like this:

        #if [ \${recordfail} = 1 ]; then
        #  set timeout=-1
        #else
          set timeout=${GRUB_TIMEOUT}
        #fi

       then sudo update-grub and sudo update-grub2

    5) Install Boot-Repair:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair

    Boot-Repair output:

        Boot successfully repaired. ...
        The boot files of [The OS now in use - Ubuntu 12.04.1 LTS] are far from the start of the disk.
        Your BIOS may not detect them. You may want to retry after creating a /boot partition
        (EXT4, 200MB, start of the disk). This can be performed via tools such as gParted.
        Then select this partition via the [Separate /boot partition:] option of [Boot Repair].
        (https://help.ubuntu.com/community/BootPartition)

    http://paste.ubuntu.com/1220468/ - here is the full boot-repair data. Could grub files not being at the start of the disk create such issues?

  • Publishing/subscribing multiple subsets of the same server collection

    - by matb33
    How does one go about publishing different subsets (or "views") of a single collection on the server as multiple collections on the client? Here is some pseudo-code to help illustrate my question.

    The items collection on the server: assume that I have an items collection on the server with millions of records. Let's also assume that 50 records have the enabled property set to true, and 100 records have the processed property set to true. All others are set to false.

        items:
        { "_id": "uniqueid1", "title": "item #1", "enabled": false, "processed": false },
        { "_id": "uniqueid2", "title": "item #2", "enabled": false, "processed": true },
        ...
        { "_id": "uniqueid458734958", "title": "item #458734958", "enabled": true, "processed": true }

    Server code: let's publish two "views" of the same server collection. One will send down a cursor with 50 records, and the other will send down a cursor with 100 records. There are over 458 million records in this fictitious server-side database, and the client does not need to know about all of those (in fact, sending them all down would probably take several hours in this example):

        var Items = new Meteor.Collection("items");

        Meteor.publish("enabled_items", function () {
            // Only 50 "Items" have enabled set to true
            return Items.find({enabled: true});
        });

        Meteor.publish("processed_items", function () {
            // Only 100 "Items" have processed set to true
            return Items.find({processed: true});
        });

    Client code: in order to support the latency compensation technique, we are forced to declare a single collection Items on the client. It should become apparent where the flaw is: how does one differentiate between Items for enabled_items and Items for processed_items?

        var Items = new Meteor.Collection("items");

        Meteor.subscribe("enabled_items", function () {
            // This will output 50, fine
            console.log(Items.find().count());
        });

        Meteor.subscribe("processed_items", function () {
            // This will also output 50, since we have no choice but to use
            // the same "Items" collection.
            console.log(Items.find().count());
        });

    My current solution involves monkey-patching _publishCursor to allow the subscription name to be used instead of the collection name. But that won't do any latency compensation. Every write has to round-trip to the server:

        // On the client:
        var EnabledItems = new Meteor.Collection("enabled_items");
        var ProcessedItems = new Meteor.Collection("processed_items");

    With the monkey-patch in place, this will work. But go into offline mode and changes won't appear on the client right away -- we'll need to be connected to the server to see changes. What's the correct approach?

  • Updating, etc., automatically

    - by Steve D
    Is there a way to set up Ubuntu 12.04 (or earlier versions) so that all recommended updates are done automatically, say once a week? When I say automatically, I mean no password entry or user intervention required. This sounds like a stupid request, so let me tell you why I'm asking. My grandfather knows nothing about computers; he uses his solely to read Yahoo! mail. I want to get rid of his clunky, spyware-ridden Windows XP and install Ubuntu. I want to set it up so that when he turns the computer on, after a couple of minutes, voila!, Yahoo! mail, already signed in, ready to go. The problem is I don't want to have to go over there every week or so and make sure everything is up to date, he hasn't accidentally installed any spyware, etc. So can this be done? Is this the best way to set things up for my grandfather? Are there other things I should be worried about when it comes to keeping things hassle-free for him? Please don't post anything like "why not teach him how to... blah blah blah". My grandfather is 80 years old and has made it clear email is the only thing he will ever use a computer for! Thanks!

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in exactly the same way as my current hosting (but only to me, on my local computer). I currently pay a big web hosting company to host my website and web store, and they are doing a great job, but I would like to be able to work on my web site and its corresponding MySQL DB, HTML, and PHP code without being at risk of messing something up completely on the live servers.

    My current plan of action:

    1. Set up a VM web server with Debian, MySQL, PHP, Apache.
    2. Copy the web store (PHP/HTML) code to the VM server.
    3. Copy my current MySQL databases from my hosting provider and install them on the VM server.
    4. Modify and test new features on the VM server.
    5. Upload the MySQL DB and HTML/PHP code back to the web host's server, where it should work as before but with the new modifications.

    Questions: Now, I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions. I have my /etc/hosts file set up so www.MySite.test redirects to the IP address of the local VM web server. Once I import my PHP/HTML files and MySQL file, what's the best way to navigate around the fact that all of my files and DBs will reference www.MySite.com? I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?

  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class. The result should look like this: SetPropertyValue("size.height", 50) - where size is a property of my derived class and height is a property of size. I'm almost done with my implementation, but there's one final obstacle that I want to solve before moving on. To describe it I first have to explain my implementation a bit:

    - Properties that can be modified are decorated with an attribute.
    - A method in my base class searches for all derived classes and their decorated properties.
    - For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property.
    - Property modifiers are stored in a dictionary, with the name of the property as key.
    - In my base class, there is another dictionary that contains all property-modifier-dictionaries, with the Type of the respective class as key.

    What the SetPropertyValue method does is this:

    1. Get the correct property-modifier-dictionary, using the concrete type of the derived class (<- yet to solve).
    2. Get the property modifier of the property to change (e.g. of the property size).
    3. Use the get or set delegate to modify the property's value.

    Some example code to clarify further:

        private static Dictionary<RuntimeTypeHandle, object> EditableTypes; // property-modifier-dictionary

        protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value)
        {
            var modifier = map[property];   // get the property modifier
            modifier.Set((T)this, value);   // use the set delegate (encapsulated in a method)
        }

    In the above code, T is the type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overridden virtual method in the derived class:

        public override void SetPropertyValue(string property, object value)
        {
            base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value);
        }

    What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropriate type and pass it to the SetPropertyValue method. I want to get rid of this SetPropertyValue override in my derived classes (since there are a lot of derived classes), but I don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method because I cannot infer a concrete type for T then. I also cannot access my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it, because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infer types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have for me.
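    One workaround sometimes used for exactly this situation is to make the base class generic over the concrete type (the "curiously recurring" pattern), so the base always knows T without any per-class override. A hedged, self-contained sketch follows; it is not the poster's code, all names are illustrative, and the reflection scan is reduced to a manual Register call:

        using System;
        using System.Collections.Generic;

        public abstract class Editable<TSelf> where TSelf : Editable<TSelf>
        {
            // One setter map per concrete TSelf (static fields of a generic class
            // are separate for each closed type).
            private static readonly Dictionary<string, Action<TSelf, object>> Setters =
                new Dictionary<string, Action<TSelf, object>>();

            protected static void Register(string name, Action<TSelf, object> setter)
            {
                Setters[name] = setter;
            }

            public void SetPropertyValue(string property, object value)
            {
                Setters[property]((TSelf)this, value);  // TSelf is known here, no override needed
            }
        }

        public class Example : Editable<Example>
        {
            public string ICString { get; set; }

            static Example()
            {
                // Stand-in for the attribute-driven reflection scan
                Register("ICString", (target, v) => target.ICString = (string)v);
            }
        }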

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, face some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache).

    Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing.

    It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read the entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.

  • PHP-based session variable not retaining value: works on localhost, but not on server

    - by Foo
    I've been trying to debug this problem for many hours, but to no avail. I've been using PHP for many years, and got back into it after a long hiatus, so I'm still a bit rusty. Anyway, my $_SESSION vars are not retaining their value for some reason that I can't figure out. The site worked perfectly on localhost, but uploading it to the server seemed to break it. The first thing I checked was the PHP.ini server settings; everything seems fine. In fact, my login system is session based and it works perfectly. So now that I know $_SESSIONs are working properly and retaining their value for my login, I'm presuming the server is set up correctly and the problem is in my script.

    Here's a stripped version of the code that's causing the problem. $type, $order and $style are not being retained after they are set via a GET variable. The user clicks a link, which sets a variable via GET, and this variable should be retained for the remainder of their session. Is there some problem with my logic that I'm not seeing?

        <?php
        require_once('includes/top.php'); // first line includes a call to session_start();
        require_once('includes/db.php');

        $type  = filter_input(INPUT_GET, 't', FILTER_VALIDATE_INT);
        $order = filter_input(INPUT_GET, 'o', FILTER_VALIDATE_INT);
        $style = filter_input(INPUT_GET, 's', FILTER_VALIDATE_INT);

        /* According to the documentation, filter_input returns NULL when variables are
           undefined. So, if t, o, or s are not set via the URL, $type, $order and $style
           will be NULL. */

        print_r($_SESSION);

        /* All other sessions, such as the login session, etc. are displayed here.
           After the sessions are set below, they are displayed up here too... simply
           with no value. This leads me to believe the problem is with the code below,
           perhaps? */

        // If $type is not null (meaning it WAS set via the GET method above)
        // or it's false because the validation failed for some reason,
        // then set the session to the $type. I removed the false check for simplicity.
        // This code is being successfully executed, and the data is being stored...
        if (!is_null($type)) {
            $_SESSION['type'] = $type;
        }

        if (!is_null($order)) {
            $_SESSION['order'] = $order;
        }

        if (!is_null($style)) {
            $_SESSION['style'] = $style;
        }

        $smarty->display($template);
        ?>

    If anyone can point me in the right direction, I'd greatly appreciate it. Thanks.

  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can try a theoretical answer. I don't need literal code/examples...

    I have a huge data set of strings which I want to cluster to find the most related ones; these could as well be seen as duplicates. I already have a set of pairs of strings for which I know that they are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that would return me the most relevant candidate pairs of strings for which we don't yet know that they are duplicates. I believe that I need clustering for this - which type? Note that I want to base myself on word-occurrence comparison, not on interpretation or meaning.

    Here is an example of two strings of which we know they are duplicates (in our view of them):

        The weather is really cold and it is raining.
        It is raining and the weather is really cold.

    Now, the following strings also exist (most to least relevant, ignoring stop words):

        Is the weather really that cold today?
        Rainy days are awful.
        I see the sunshine outside.

    The software would return the following two strings as most relevant, which aren't known to be duplicates:

        The weather is really cold and it is raining.
        Is the weather really that cold today?

    Then, I would mark that as duplicate or not duplicate and it would present me with another pair. How do I go about implementing this in the most efficient way that I can apply to a large data set?
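    Although the post explicitly doesn't ask for code, a tiny sketch may make the "word-occurrence comparison" idea concrete. This is a hedged illustration only (stop-word removal and the clustering step are omitted, and all names are made up): a Jaccard score over the word sets of two strings, which scores the example pair above relatively highly.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Similarity
        {
            // Fraction of shared words between the two strings' word sets (0..1)
            public static double Jaccard(string a, string b)
            {
                char[] separators = { ' ', '.', ',', '?', '!' };
                var wordsA = new HashSet<string>(a.ToLowerInvariant().Split(separators, StringSplitOptions.RemoveEmptyEntries));
                var wordsB = new HashSet<string>(b.ToLowerInvariant().Split(separators, StringSplitOptions.RemoveEmptyEntries));
                if (wordsA.Count == 0 && wordsB.Count == 0) return 1.0;
                int common = wordsA.Intersect(wordsB).Count();
                return (double)common / (wordsA.Count + wordsB.Count - common);
            }
        }

        // Similarity.Jaccard("The weather is really cold and it is raining.",
        //                    "Is the weather really that cold today?")  -> comparatively high score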

  • JSON datetime to SQL Server database via WCF

    - by moikey
    I have noticed a problem over the past couple of days where the dates submitted to my SQL Server database are wrong. I have a webpage where users can book facilities. This webpage takes a name, a date, a start time and an end time (BookingID is required for transactions but generated by the database), which I format as a JSON string as follows:

        {"BookingEnd":"\/Date(2012-26-03 09:00:00.000)\/","BookingID":1,"BookingName":"client test 1","BookingStart":"\/Date(2012-26-03 10:00:00.000)\/","RoomID":4}

    This is then passed to a WCF service, which handles the database insert as follows:

        [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, UriTemplate = "createbooking")]
        void CreateBooking(Booking booking);

        [DataContract]
        public class Booking
        {
            [DataMember]
            public int BookingID { get; set; }
            [DataMember]
            public string BookingName { get; set; }
            [DataMember]
            public DateTime BookingStart { get; set; }
            [DataMember]
            public DateTime BookingEnd { get; set; }
            [DataMember]
            public int RoomID { get; set; }
        }

    Booking.svc:

        public void CreateBooking(Booking booking)
        {
            BookingEntity bookingEntity = new BookingEntity()
            {
                BookingName = booking.BookingName,
                BookingStart = booking.BookingStart,
                BookingEnd = booking.BookingEnd,
                RoomID = booking.RoomID
            };
            BookingsModel model = new BookingsModel();
            model.CreateBooking(bookingEntity);
        }

    Booking model:

        public void CreateBooking(BookingEntity booking)
        {
            using (var conn = new SqlConnection("Data Source=cpm;Initial Catalog=BookingDB;Integrated Security=True"))
            using (var cmd = conn.CreateCommand())
            {
                conn.Open();
                cmd.CommandText = @"IF NOT EXISTS
                                    (
                                        SELECT * FROM Bookings
                                        WHERE BookingStart = @BookingStart
                                          AND BookingEnd = @BookingEnd
                                          AND RoomID = @RoomID
                                    )
                                    INSERT INTO Bookings (BookingName, BookingStart, BookingEnd, RoomID)
                                    VALUES (@BookingName, @BookingStart, @BookingEnd, @RoomID)";
                cmd.Parameters.AddWithValue("@BookingName", booking.BookingName);
                cmd.Parameters.AddWithValue("@BookingStart", booking.BookingStart);
                cmd.Parameters.AddWithValue("@BookingEnd", booking.BookingEnd);
                cmd.Parameters.AddWithValue("@RoomID", booking.RoomID);
                cmd.ExecuteNonQuery();
                conn.Close();
            }
        }

    This updates the database, but the time ends up as "1970-01-01 00:00:02.013" each time I submit a date in the above JSON format. However, when I do a query in SQL Server Management Studio with the above date format ("YYYY-MM-DD HH:MM:SS.mmm"), it inserts the correct values. Also, if I submit a millisecond datetime to the WCF service, the correct date is inserted. The problem seems to be with the format I am submitting. I am a little lost with this problem and don't really see why it is doing this. Any help would be greatly appreciated. Thanks.
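    One relevant detail here: the WCF JSON date format expects the number inside \/Date(...)\/ to be milliseconds since 1970-01-01 UTC, not a formatted date string, which appears consistent with the stored value landing a couple of seconds after 1970-01-01. A hedged sketch of producing that wire format on the sending side (the helper name is made up):

        using System;

        static class WcfJsonDate
        {
            static readonly DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

            public static string ToWire(DateTime value)
            {
                long ms = (long)(value.ToUniversalTime() - Epoch).TotalMilliseconds;
                return "\\/Date(" + ms + ")\\/";  // e.g. 2012-03-26 09:00 UTC -> \/Date(1332752400000)\/
            }
        }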

  • Noise Canceling Earphones

    - by Mark Treadwell
    I travel a lot. The hours spent droning through the sky can be made more tolerable with an MP3 player and a set of noise-cancelling headphones. Reducing the sound of the airflow and engines is a great relief. For a year or two, I used a pair of folding Sony MDR-NC5 Noise Canceling Headphones, but the ear foam covers self-destructed. I replaced them with old washcloth material and was happy, but the DW thought it looked bad. I switched to a new set of Sony MDR-NC6 Noise Canceling Headphones. These worked equally well, although they did not fold as small as the MDR-NC5 headphones. Over four years of use, the MDR-NC6 headphones started cutting out and making popping noises. This was not surprising considering the beating they took in travel in my backpack. It looked like I needed another new set.

    The older MDR-NC5 headphones were still on the shelf with the hated washcloth covers. A quick search online showed a vibrant business in selling replacement ear foams, often at exorbitant prices. Nowhere did I see ear foam covers made for the oblong MDR-NC5. I then realized that foam is stretchable and that the shape should not matter. After another search and some consideration, I purchased 2-5/16" foam pad ear covers that were able to stretch over the MDR-NC5's strange shape. Problem solved for less than $5.

  • Simplifying data search using .NET

    - by Peter
    An example on the asp.net site shows how to use LINQ to create a search feature for a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies
                           orderby d.Genre
                           select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies
                         select m;

            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }

            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }

            return View(movies);
        }

    I have seen similar examples in other tutorials, and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern?

    One idea I have is to create a "searchable" column in every table, which could be a computed column that concatenates the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.:

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
            -- set the variables to null if they are empty
            IF @title = '' SET @title = null
            IF @genre = '' SET @genre = null

            SELECT m.*
            FROM view_Music as m
            WHERE (title = @title OR @title IS NULL)
              AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
            ORDER BY Id desc
            OPTION (RECOMPILE)

    Any suggestions? Tips?
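    For what it's worth, one way to keep the chained filters readable as the criteria grow is a small conditional-filter extension; this is a hedged sketch (names are illustrative, not from the original project), not necessarily the best refactoring for every case.

        using System;
        using System.Linq;

        public static class QueryableFilters
        {
            // Applies the predicate only when the condition holds, so optional
            // criteria can be chained without repeated if-blocks.
            public static IQueryable<T> WhereIf<T>(
                this IQueryable<T> source,
                bool condition,
                System.Linq.Expressions.Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

        // Possible usage inside the controller action:
        // var movies = db.Movies
        //     .WhereIf(!String.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
        //     .WhereIf(!String.IsNullOrEmpty(movieGenre),  m => m.Genre == movieGenre);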

  • How can I use an object pool for optimization in AndEngine?

    - by coder_For_Life22
    I have read a tutorial that shows how to reuse sprites that are re-added to the scene, such as bullets from a gun or any other objects, using an ObjectPool. In my game I have a variation of sprites, about 6 altogether, with different textures. This is how the object pool is set up, with its own class extending the GenericPool class:

        public class BulletPool extends GenericPool<BulletSprite> {

            private TextureRegion mTextureRegion;

            public BulletPool(TextureRegion pTextureRegion) {
                if (pTextureRegion == null) {
                    // Need to be able to create a Sprite, so the pool needs to have a TextureRegion
                    throw new IllegalArgumentException("The texture region must not be NULL");
                }
                mTextureRegion = pTextureRegion;
            }

            /**
             * Called when a Bullet is required but there isn't one in the pool
             */
            @Override
            protected BulletSprite onAllocatePoolItem() {
                return new BulletSprite(mTextureRegion);
            }

            /**
             * Called when a Bullet is sent to the pool
             */
            @Override
            protected void onHandleRecycleItem(final BulletSprite pBullet) {
                pBullet.setIgnoreUpdate(true);
                pBullet.setVisible(false);
            }

            /**
             * Called just before a Bullet is returned to the caller; this is where you write your
             * initialization code, i.e. set location, rotation, etc.
             */
            @Override
            protected void onHandleObtainItem(final BulletSprite pBullet) {
                pBullet.reset();
            }
        }

    As you see here, it takes a TextureRegion parameter. The only problem I am facing is that I need to have 6 different sprites recycled and reused in the ObjectPool, but this ObjectPool is set up to use only one TextureRegion. Any ideas or suggestions on how to do this?

  • HDMI & DisplayPort stopped working on 11.10

    - by dizzy
    After upgrading two laptops to 11.10, the HDMI and DisplayPort outputs stopped working. Symptoms on each (btw, it used to work with 11.04 on both):

    Laptop Dell Inspiron 1525 (HDMI, Intel GMX 3100): after the HDMI cable is plugged in, the screen is corrupted (no panel, no icons) and the system is unresponsive. The TV set receives some signal, but shows only a blue screen, and some regular ticks can be heard. After unplugging the cable the system recovers. No logs were checked.

    Thinkpad W510 (DisplayPort, NVIDIA): the simple "Screens" utility does not recognize the TV set, but as far as I could tell from the net this is something to do with the differences between the NVIDIA driver API and the one the utility expects. However, using nvidia-settings the TV is recognized, but it cannot be enabled and used. Besides that, the touchpad freezes after the HDMI-to-DisplayPort connector is plugged into the laptop (not immediately, but after a few seconds - probably after some handshake with the TV set crashes).

    It is strange that no such bug reports can be found on the net, so I guess it is something wrong with our laptops only, but I would appreciate some hints (i.e. any known changes recently related to HDMI, DisplayPort, X windows, kernel... wherever I should take a look and fix the issue).

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article I'll quickly look at how you can switch logging on from the EM console for an application and how you can view the output. Before we start, I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.

    Step 1 - Select your Application. In the EM navigator select the app you're interested in. At this point you could bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.

    Step 2 - Open the Application Deployment Menu. At the top of the screen, underneath the application name, you'll find a drop-down menu which will take you to the options to view log messages and configure logging.

    Step 3 - Set your Logging Levels. Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries.

    Step 4 - View the Output. Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds each message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, you can see which log to download.

  • Monday at Oracle OpenWorld 2012 - Must See Session: “Using the Right Tools, Techniques, and Technologies for Integration Projects”

    - by Lionel Dubreuil
    Don’t miss this “CON8669 - Using the Right Tools, Techniques, and Technologies for Integration Projects” session with Timothy Hall, Sr. Director, Oracle:

        Date: Monday, Oct 1
        Time: 3:15 PM - 4:15 PM
        Location: Moscone South - 308

    Every integration project brings its own unique set of challenges. There are many tools and techniques to choose from. How do you ensure that you have a means of consistently and repeatedly making decisions about which tools, techniques, and technologies are used? In working with many customers around the globe, Oracle has developed a set of criteria to help evaluate a variety of common integration questions. This session explores these criteria and how they have been further organized into decision trees that offer a repeatable means for ensuring that project teams are given the same guidance from project to project. Using these techniques, the presentation shows how you can reduce risk and speed productivity for your projects.

    Objectives for this session are to:
    - Discuss common questions that arise at the start of integration projects
    - Review various decision criteria and approaches for getting to a consistent set of answers
    - Explore how these techniques can be used to reduce risk and speed productivity

  • Why is my Type.GetFields(BindingFlags.Instance|BindingFlags.Public) not working?

    - by granadaCoder
    My code can see the non-public members, but not the public ones. Why?

        FieldInfo[] publicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.Public);

    is returning nothing. Note: I'm trying to get at the properties on the abstract class as well as the concrete class (and read the attributes as well). The MSDN example works with the two flags (BindingFlags.Instance | BindingFlags.Public), but my mini inheritance example below does not.

        private void RunTest1()
        {
            try
            {
                textBox1.Text = string.Empty;
                Type t = typeof(MyInheritedClass);

                // Look at the BindingFlags *** NonPublic ***
                int fieldCount = 0;
                while (null != t)
                {
                    fieldCount += t.GetFields(BindingFlags.Instance | BindingFlags.NonPublic).Length;
                    FieldInfo[] nonPublicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.NonPublic);
                    foreach (FieldInfo field in nonPublicFieldInfos)
                    {
                        if (null != field)
                        {
                            Console.WriteLine(field.Name);
                        }
                    }
                    t = t.BaseType;
                }

                Console.WriteLine("\n\r------------------\n\r");

                // Look at the BindingFlags *** Public ***
                t = typeof(MyInheritedClass);
                FieldInfo[] publicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.Public);
                foreach (FieldInfo field in publicFieldInfos)
                {
                    if (null != field)
                    {
                        Console.WriteLine(field.Name);
                        object[] attributes = field.GetCustomAttributes(t, true);
                        if (attributes != null && attributes.Length > 0)
                        {
                            foreach (Attribute att in attributes)
                            {
                                Console.WriteLine(att.GetType().Name);
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                ReportException(ex);
            }
        }

        private void ReportException(Exception ex)
        {
            Exception innerException = ex;
            while (innerException != null)
            {
                Console.WriteLine(innerException.Message + System.Environment.NewLine +
                                  innerException.StackTrace + System.Environment.NewLine +
                                  System.Environment.NewLine);
                innerException = innerException.InnerException;
            }
        }

        public abstract class MySuperType
        {
            public MySuperType(string st)
            {
                this.STString = st;
            }

            public string STString { get; set; }

            public abstract string MyAbstractString { get; set; }
        }

        public class MyInheritedClass : MySuperType
        {
            public MyInheritedClass(string ic) : base(ic)
            {
                this.ICString = ic;
            }

            [Description("This is an important property"), Category("HowImportant")]
            public string ICString { get; set; }

            private string _oldSchoolPropertyString = string.Empty;
            public string OldSchoolPropertyString
            {
                get { return _oldSchoolPropertyString; }
                set { _oldSchoolPropertyString = value; }
            }

            [Description("This is a not so important property"), Category("HowImportant")]
            public override string MyAbstractString { get; set; }
        }
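    Worth noting about the classes above: they declare properties rather than public fields, and auto-implemented properties compile to private backing fields, which is why the NonPublic pass finds members while the Public pass finds none. A hedged sketch of enumerating the public properties and their Description attributes instead (RunTest2 is a made-up name; it assumes the classes above plus the usual usings - System, System.ComponentModel, System.Reflection):

        private void RunTest2()
        {
            Type t = typeof(MyInheritedClass);
            foreach (PropertyInfo prop in t.GetProperties(BindingFlags.Instance | BindingFlags.Public))
            {
                Console.WriteLine(prop.Name);
                foreach (Attribute att in prop.GetCustomAttributes(typeof(DescriptionAttribute), true))
                {
                    Console.WriteLine("  " + ((DescriptionAttribute)att).Description);
                }
            }
        }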

  • Can I dynamically size a div based on discrete units and still center? [closed]

    - by Dave
    Here's the problem: I have a website I'm working on that, depending on what the user has selected, will pop up a different number of boxes, each 80px high and 200px wide and currently set to float:left. These boxes are contained within a div that is basically the whole width of the screen minus some 1% margins. So at the moment they all fill in the box and, depending on screen size, occupy a grid of variable height and width. The problem is, if the screen size makes the containing box, say, 700px wide, then you end up with 3 boxes per row and a bloody big margin on the right. What I would like to do is center the grid of boxes inside the containing box so that the margins are equal left and right. I suspect this can't be done, since it means the containing box needs to set its size by looking both at the size of the user's window and at the size of its children. It would be easy to do with JavaScript, but I'd prefer not to if there is another option. If it is truly impossible then I will simply script it and let non-JS users see a left-justified set of boxes.

  • How to recover lost files after an install

    - by Gentry McColm
    I'm a newbie learning along the way. I recently installed a 2nd hdd into my Ubuntu box. One drive is about 160 GB and runs Ubuntu 12.04; the new hdd is 1 TB, used for holding videos. I had set up the 2nd drive as ext3, I believe, and set up folders on it to hold the videos. Worked great. I also thought I had set it up for auto mount. I was able to read and write on it, etc.

    The computer froze, so I had to reboot it. When I did, the system would not reboot: it hung on the Ubuntu screen with 5 dots. I hit a few buttons and the command screen showed up, indicating that my 2nd hdd would not mount. That stopped up the whole system. I tried rebooting, no go. I had to reinstall Ubuntu on the 1st hdd; I did not apparently touch the 2nd one.

    Well, when I got it up and running again, my 2nd hdd mounted automatically (yeah!), but now I cannot find the videos I already had on it. I had not put more than about 30 GB of videos on it, but now when I read its Properties, it says I'm using about 50 GB. So I'm wondering if somewhere in there, buried, are my 17 videos. Any help in recovering this? Thanks!
