Search Results

Search found 6017 results on 241 pages for 'universal records managem'.

Page 30/241 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • MySQL Delete Records Older Than X Minutes?

    - by sajanNOPPIX
    I've searched quite a bit and found a few solutions that didn't end up working for me, and I can't understand why. I have a table with a timestamp column. The MySQL type for this column is 'datetime'. I insert into this table the following from PHP: date('Y-m-d H:i:s'). This inserts what looks like the correct value for the MySQL datetime: 2012-06-28 15:31:46. I want to use this column to delete rows that are older than, say, 10 minutes. I'm running the following query, but it's not working. It affects 0 rows. DELETE FROM adminLoginLog WHERE timestamp < (NOW() - INTERVAL 10 MINUTE); Can anyone shed some light on what I'm doing wrong and why it's not working properly? Thanks. Update: It looks like my first issue is that I'm using DATETIME when I should be using the TIMESTAMP data type. I'm updating my code to do that now. Thanks.
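
    For reference, a minimal sketch of the delete (plus a safety check) under the assumptions above: the table is adminLoginLog, the column is a DATETIME named timestamp, and PHP's date('Y-m-d H:i:s') and MySQL's NOW() produce times in the same time zone — a mismatch there is a common reason the delete matches 0 rows.

      -- Sketch: assumes the adminLoginLog table and `timestamp` DATETIME column described above.
      -- Check what would be removed first:
      SELECT COUNT(*) FROM adminLoginLog
      WHERE `timestamp` < (NOW() - INTERVAL 10 MINUTE);

      -- Then the delete itself:
      DELETE FROM adminLoginLog
      WHERE `timestamp` < (NOW() - INTERVAL 10 MINUTE);

    Comparing SELECT NOW() against the values PHP is inserting is a quick way to confirm whether the two clocks and time zones agree.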

    Read the article

  • In Rails, how do I find records by "not equal"

    - by Mazonowicz
    I'm building an application that contains a bunch of projects at various stages, and I need to list either the completed projects or the projects at the other stages. To list the completed projects, I define a named scope: named_scope :current, :conditions => { :current_stage => "Completed" } and use @projects = Project.current in my controller. But how do I find all the projects at other stages? I thought it would involve != but I can't get that to work. Any pointers very much appreciated. Thanks a lot.
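
    As a sketch only (staying with the legacy named_scope syntax the question already uses), one common pattern is a second scope with a SQL fragment in the conditions array; the scope name not_completed is just an illustration:

      # Sketch: assumes the Project model and current_stage column described above.
      class Project < ActiveRecord::Base
        named_scope :current,       :conditions => { :current_stage => "Completed" }
        named_scope :not_completed, :conditions => ["current_stage != ?", "Completed"]
      end

      # In the controller:
      @projects = Project.not_completed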

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify and a custom db that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported. What I originally started with was selecting the list of ids that have been imported from the custom db, and then querying the vendor db using the list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2100 records imported into the custom db, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to remove the first 2000 ids in the remote query, and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of doing a cross-db stored procedure? Thanks in advance.
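
    One hedged alternative is to pull back only a narrow id/date projection from the vendor database and do the exclusion in memory, so the id list is never sent as SQL parameters. The names vendorDb, customDb, VendorRecords and ImportedIds below are placeholders for the real contexts and tables:

      // Sketch, not a drop-in fix: taking (importedIds.Count + 250) newest rows guarantees
      // at least 250 survivors after the exclusion, while only two narrow columns cross the wire.
      var importedIds = new HashSet<int>(customDb.ImportedIds.Select(i => i.VendorId));

      var newestUnimported = vendorDb.VendorRecords
          .OrderByDescending(r => r.CreatedDate)
          .Select(r => new { r.Id, r.CreatedDate })   // narrow projection only
          .Take(importedIds.Count + 250)
          .AsEnumerable()                              // switch to LINQ-to-Objects here
          .Where(r => !importedIds.Contains(r.Id))     // exclusion happens in memory
          .Take(250)
          .ToList();

    The 250 surviving ids can then be sent back in a single Contains query (well under the 2100-parameter limit) to fetch the full rows.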

    Read the article

  • Two group of name server records, where to put them?

    - by sazary
    I've registered my domain with a registrar that has very poor DNS management tools. I need to point from my registrar to another third-party DNS manager, and then from there point to the name servers of my host, along with some other DNS records (such as SPF records). What I've done so far is this: I've given the addresses of the name servers of my third-party DNS manager to the DNS manager of my registrar, and then I've given the addresses of the name servers of my host to the third-party DNS manager, along with some SPF and MX records. Is this setup correct? Or should I add the NS addresses of my host to my registrar's DNS manager too? The problem is that my domain doesn't resolve to my host, and I see some strange records on some DNS servers around the world that I have not set!
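
    One way to see what resolvers around the world are actually being told, while sorting this out, is to walk the delegation with dig; example.com below is a placeholder for the real domain:

      # Placeholder domain -- substitute the real one.
      dig NS example.com +trace        # follows the delegation from the root: registrar -> third-party DNS
      dig A www.example.com @8.8.8.8   # what a public resolver currently returns for the site
      dig MX example.com @8.8.8.8      # and for mail

    If the +trace output shows NS records other than the third-party provider's, the registrar-side delegation is the first place to look.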

    Read the article

  • How to insert many similar records in MySQL at a time using phpMyAdmin?

    - by Networker
    We know that we can insert multiple records at a time using this query: INSERT INTO `TABLE1` (`First`,`Last`) VALUES ('name1','surname1'), ('name2','surname2'), ('name3','surname3'), ('name4','surname4'); but what if we want to add 1000 similar records like the ones above (name*, surname*)? Do we have to write out all the records, or can we use something like a wildcard? Or is there any other solution using MySQL?
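
    One common workaround, sketched here under the assumption that the numbering simply runs 1..1000, is a short stored procedure that loops and builds each value with CONCAT; it can be run from phpMyAdmin's SQL tab (depending on the phpMyAdmin version you may need to set the statement delimiter in its separate Delimiter box rather than via the DELIMITER lines):

      -- Sketch: assumes TABLE1(First, Last) as above and a name1..name1000 / surname1..surname1000 pattern.
      DELIMITER $$
      CREATE PROCEDURE insert_similar(IN how_many INT)
      BEGIN
        DECLARE i INT DEFAULT 1;
        WHILE i <= how_many DO
          INSERT INTO `TABLE1` (`First`, `Last`)
          VALUES (CONCAT('name', i), CONCAT('surname', i));
          SET i = i + 1;
        END WHILE;
      END$$
      DELIMITER ;

      CALL insert_similar(1000);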

    Read the article

  • Does anyone have experience with BSODs while creating a universal image with FOG?

    - by Devator
    I want to create a Windows XP universal image to image our computers using a FOG server. The FOG server part is running fine; however, when creating a Windows XP universal image, it keeps crashing (BSOD) with the 0x0000007B error. I believe this has to do with the mass storage drivers. I followed many tutorials, but each of them crashes with the 0x0000007B error. FYI, I followed this tutorial (but couldn't download the files, the host is down). I was wondering if any of you have experience with this and know how to solve my problem?

    Read the article

  • How to reference individual cells in Excel to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data, e.g. multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so that when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y axis. Each day the SQL table has new data in it for the new day, and I want the spreadsheet to reference this new data in the column for the new day. Any ideas? I have seen how you can generally bring in data from a SQL table (in our case using ODBC, as it is not MS SQL), but this is not simply bringing in multiple records as you would from a CSV file; rather, specific records in the SQL table need to map to specific cells and columns in the spreadsheet.
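
    As a sketch of the query that each cell (or each day's refreshed column) ultimately needs to run, assuming a table named FinancialData with columns TransactionDate, AccountCode and Value — the real names will differ:

      -- Hypothetical table/column names; one value for one cell:
      SELECT Value
      FROM   FinancialData
      WHERE  TransactionDate = ?   -- date taken from one axis of the sheet
      AND    AccountCode     = ?;  -- account code taken from the other axis

      -- Or pull a whole day in one go and let the results fill the new column:
      SELECT AccountCode, Value
      FROM   FinancialData
      WHERE  TransactionDate = ?;

    In Excel, Microsoft Query (or an ODBC connection with a parameterised query) can bind the ? parameters to worksheet cells, so the fetch follows the dates already on the sheet.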

    Read the article

  • Error "Caused by: java.lang.OutOfMemoryError" when loading images

    - by user2493770
    This is my method to load images in background, the first and second load normally. But after these loading, a memory error appears. How can I fix this? public class MainArrayAdapterViewHolder extends ArrayAdapter<EmpresaListaPrincipal> { private final Context context; private ArrayList<EmpresaListaPrincipal> data_array; public DisplayImageOptions options; public ImageLoader imageLoader = ImageLoader.getInstance(); public MainArrayAdapterViewHolder(Context context, ArrayList<EmpresaListaPrincipal> list_of_ids) { super(context, R.layout.main_list_rowlayout, list_of_ids); this.context = context; this.data_array = list_of_ids; //------------- read more here https://github.com/nostra13/Android-Universal-Image-Loader options = new DisplayImageOptions.Builder().showImageForEmptyUri(R.drawable.ic_launcher).showImageOnFail(R.drawable.ic_launcher).resetViewBeforeLoading() .cacheOnDisc().imageScaleType(ImageScaleType.IN_SAMPLE_INT).bitmapConfig(Bitmap.Config.RGB_565).delayBeforeLoading(0).build(); File cacheDir = StorageUtils.getCacheDirectory(context); ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(context).memoryCacheExtraOptions(720, 1280) // default = device screen // dimensions .discCacheExtraOptions(720, 1280, CompressFormat.JPEG, 100).threadPoolSize(3) // default .threadPriority(Thread.NORM_PRIORITY - 1) // default .memoryCacheSize(2 * 1024 * 1024).discCache(new UnlimitedDiscCache(cacheDir)) // default .discCacheSize(50 * 1024 * 1024).discCacheFileCount(100).discCacheFileNameGenerator(new HashCodeFileNameGenerator()) // default .imageDownloader(new BaseImageDownloader(context)) // default .tasksProcessingOrder(QueueProcessingType.FIFO) // default .defaultDisplayImageOptions(options) // default .build(); imageLoader.init(config); } @Override public View getView(int position, View convertView, ViewGroup parent) { ViewHolder viewholder; View v = convertView; //Asociamos el layout de la lista que hemos creado e incrustamos el ViewHolder if(convertView == null){ LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); //View rowView = inflater.inflate(R.layout.main_list_rowlayout, parent, false); v = inflater.inflate(R.layout.main_list_rowlayout, parent, false); viewholder = new ViewHolder(); viewholder.textView_main_row_title = (TextView) v.findViewById(R.id.textView_main_row_title); viewholder.imageView_restaurant_icon = (ImageView) v.findViewById(R.id.imageView_restaurant_icon); viewholder.textView_main_row_direccion = (TextView) v.findViewById(R.id.textView_main_row_direccion); v.setTag(viewholder); } ImageLoadingListener mImageLoadingListenr = new ImageLoadingListener() { @Override public void onLoadingStarted(String arg0, View arg1) { // Log.e("* started *", String.valueOf("complete")); } @Override public void onLoadingComplete(String arg0, View arg1, Bitmap arg2) { // Log.e("* complete *", String.valueOf("complete")); } @Override public void onLoadingCancelled(String arg0, View arg1) { } @Override public void onLoadingFailed(String arg0, View arg1, FailReason arg2) { // TODO Auto-generated method stub } }; try { viewholder = (ViewHolder) v.getTag(); viewholder.textView_main_row_title.setText(data_array.get(position).getNOMBRE()); viewholder.textView_main_row_direccion.setText(data_array.get(position).getDIRECCION()); String image = data_array.get(position).getURL(); // ------- image --------- try { if (image.length() > 4) imageLoader.displayImage(image, viewholder.imageView_restaurant_icon, options, 
mImageLoadingListenr); } catch (Exception ex) { } //textView_main_row_title.setText(name); //textView_main_row_address.setText(address); } catch (Exception e) { // TODO: handle exception } return v; } public class ViewHolder { public TextView textView_main_row_title; public TextView textView_main_row_direccion; //public TextView cargo; public ImageView imageView_restaurant_icon; } }
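
    One detail worth flagging, as a guess from the snippet above: the ImageLoaderConfiguration (with a 720x1280 memory cache) is built and init() is called inside the adapter's constructor, so it runs every time an adapter is created. The Android-Universal-Image-Loader documentation recommends configuring the loader once for the whole app, e.g. in a custom Application subclass; MyApplication below is an assumed name and must be registered in AndroidManifest.xml via android:name:

      // Sketch: one-time initialisation of nostra13's Universal Image Loader.
      import android.app.Application;
      import com.nostra13.universalimageloader.core.ImageLoader;
      import com.nostra13.universalimageloader.core.ImageLoaderConfiguration;

      public class MyApplication extends Application {
          @Override
          public void onCreate() {
              super.onCreate();
              ImageLoaderConfiguration config =
                      new ImageLoaderConfiguration.Builder(getApplicationContext())
                              .threadPoolSize(3)
                              .memoryCacheSize(2 * 1024 * 1024)   // keep the in-memory cache small
                              .discCacheSize(50 * 1024 * 1024)
                              .build();
              ImageLoader.getInstance().init(config);
          }
      }

    The adapter then only calls ImageLoader.getInstance() and displayImage(...), without building another configuration.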

    Read the article

  • Where does Rails get its datetime for creating records?

    - by gwapEs9
    I have a Rails app with a data model called 'jobs' and I'm faced with a critical design choice. I don't know enough about Rails and its inner workings to be able to say for sure what I should do, despite a complete read of the Rails and Ruby docs. I want to be able to accurately display the age of a job record in days, so that when a customer logs in, they can see that the job they submitted is 'x' days old. Where does a Rails app on Heroku get its timestamps? From Heroku, or from the customer's system clock? If a customer has an out-of-date system clock and submits a job, it could really mess up the sorting of their job list, not to mention my work as the overseer of job records. Any advice out there? EDIT: Just to be clear, I'm not asking how to list jobs by their date, but which clock a Rails app on Heroku bases its records on.
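
    For the narrower "x days old" display itself, a sketch assuming the standard created_at column that ActiveRecord timestamps maintain (it is populated by the server running the app, not by the visitor's machine):

      # Sketch: assumes the jobs table has the usual created_at timestamp column.
      class Job < ActiveRecord::Base
        def age_in_days
          (Date.today - created_at.to_date).to_i
        end
      end

      # e.g. in a view: "#{job.age_in_days} days old"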

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure: I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug field: SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records FROM people_person GROUP BY slug HAVING (COUNT(slug) > 1) ORDER BY last_name, first_name, id; But this includes records for people who may not have at least 1 article, blog entry, or podcast episode. Can I tweak this to fit the second criterion?
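
    A sketch of one way to bolt the second condition onto that query, assuming the related tables are named articles, blog_entries and podcast_episodes with a person_id foreign key (the real table and column names will differ):

      SELECT p.id, p.first_name, p.last_name, p.slug, COUNT(p.slug) AS person_records
      FROM people_person p
      WHERE EXISTS (SELECT 1 FROM articles         a WHERE a.person_id = p.id)
         OR EXISTS (SELECT 1 FROM blog_entries     b WHERE b.person_id = p.id)
         OR EXISTS (SELECT 1 FROM podcast_episodes e WHERE e.person_id = p.id)
      GROUP BY p.slug
      HAVING COUNT(p.slug) > 1
      ORDER BY p.last_name, p.first_name, p.id;

    Note the subtlety: written this way, a slug only survives when more than one of its people have content; if the intent is "duplicate slug where at least one of the people has content", the EXISTS tests belong in a HAVING over a joined count instead.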

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...). For each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced. I have to implement this in Java or C++. My first idea was to define functions / methods like: One method to get all the lines from the file. One method to filter out the unwanted lines. One method to parse the filtered lines into valid records. One method to remove duplicate records. One method to group records and number them. The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell I have immediately thought about some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages.
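
    As a sketch of the lazy-pipeline idea in Java: java.util.stream pipelines over Files.lines are evaluated on demand, so only a small window of the file is resident at any time. Record, keepLine, parse and isValid below are placeholders for the real types and criteria:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.stream.Stream;

      public class LazyPipeline {
          public static void main(String[] args) throws IOException {
              try (Stream<String> lines = Files.lines(Paths.get("input.txt"))) { // read lazily
                  lines.filter(LazyPipeline::keepLine)          // drop unwanted lines
                       .map(LazyPipeline::parse)                // parse into records
                       .filter(Record::isValid)                 // drop invalid records
                       .forEachOrdered(LazyPipeline::consume);  // consume in input order
              }
          }

          // Placeholder implementations -- the real criteria live here.
          static boolean keepLine(String line) { return !line.isEmpty(); }
          static Record parse(String line)     { return new Record(line); }
          static void consume(Record r)        { System.out.println(r.value); }

          static class Record {
              final String value;
              Record(String v) { value = v; }
              boolean isValid() { return true; }
          }
      }

    The stateful steps (dropping consecutive duplicates, numbering within a group) do not map cleanly onto these stateless operators; they are usually implemented as a small custom Iterator or Spliterator wrapped around the line stream, which keeps the same pull-based, one-record-at-a-time behaviour.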

    Read the article

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has a AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay? Disable IPv6 on client? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Too many clients to ensure this is set everywhere consistently. Will cause more problems later when we finally implement IPv6. Disable IPv6 on the server? Nope, Microsoft says IPv6 is a mandatory part of the Windows operating system. Requires an inconvenient registry hack to disable the entire IPv6 stack. Ensuring this is correctly set on all servers is inconvenient. Will cause more problems later when we finally implement IPv6. Mask IPv6 records on the user-facnig DNS recursor? Nope, we're using NLNet Unbound and it doesn't support that. Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible. At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way. UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable. C:\Windows\system32>ping myhost.mydomain Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data: General failure. General failure. General failure. General failure. Ping statistics for 2002:1234:1234::1234:1234: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it does a fallback to IPv4. UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC1918 addresses. UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly. netsh int ipv6 6to4 set state disabled But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
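
    For reference, Windows' documented DisabledComponents registry value has a setting (0x20) that tells the stack to prefer IPv4 over IPv6 in its prefix policies without disabling IPv6 itself; hedged as something to test on a few machines (and push via Group Policy preferences if it helps) rather than a guaranteed fix for the 6to4 issue above:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0x20 /f
      REM 0x20 = "prefer IPv4 over IPv6"; a reboot is required before it takes effect.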

    Read the article

  • Prevent mail from being flagged as spam when switching mail servers (new SPF records)?

    - by Jakobud
    For our business, we send out a significant number of newsletter alerts to customers who sign up for them on our website. We used to send this mail directly from our web server via PHP, but because the web server limited the number of emails we could send per day, we purchased a VM server at a different host (one that doesn't throttle email) and we are going to use that account solely for sending out the emails. Anyway, now that the SPF records are going to be different from what they used to be and the source mail server is different, what steps need to be taken to prevent these emails from being flagged as spam? I know Gmail is pretty smart about determining whether the person actually sending the email is sending it from the server it expects (for flagging phishing emails, etc.). We don't want that to happen to our emails. Just sending a couple of test emails out, Gmail shows the SPF record saying: Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.23.176 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] So is there anything we need to do with regard to SPF records as we move forward?
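
    For what it's worth, the shape of the TXT record that usually needs updating looks like this; 203.0.113.25 stands in for the new sending VM and example.com for the real domain:

      ; Placeholder values -- substitute the real domain and the new server's IP.
      example.com.   3600   IN   TXT   "v=spf1 a mx ip4:203.0.113.25 ~all"

    Making sure the new VM's forward and reverse DNS (PTR) match the name it announces in HELO, and adding DKIM signing where practical, are the other usual steps that keep a new sending host out of the spam folder.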

    Read the article

  • Eclipse gives me a weird error when compiling...

    - by Legend
    I have this function which returns a datatype InetAddress[] public InetAddress [] lookupAllHostAddr(String host) throws UnknownHostException { Name name = null; try { name = new Name(host); } catch (TextParseException e) { throw new UnknownHostException(host); } Record [] records = null; if (preferV6) records = new Lookup(name, Type.AAAA).run(); if (records == null) records = new Lookup(name, Type.A).run(); if (records == null && !preferV6) records = new Lookup(name, Type.AAAA).run(); if (records == null) throw new UnknownHostException(host); InetAddress[] array = new InetAddress[records.length]; for (int i = 0; i < records.length; i++) { Record record = records[i]; if (records[i] instanceof ARecord) { ARecord a = (ARecord) records[i]; array[i] = a.getAddress(); } else { AAAARecord aaaa = (AAAARecord) records[i]; array[i] = aaaa.getAddress(); } } return array; } Eclipse complains that the return type should be byte[][] but when I change the return type to byte[][], it complains that the function is returning the wrong data type. I'm stuck in a loop. Does anyone know what is happening here?

    Read the article

  • fabric deploy problem

    - by alexarsh
    Hi, I'm trying to deploy a django app with fabric and get the following error: Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> - u <username> -p <password> [192.168.2.93] run: cat /etc/issue Traceback (most recent call last): File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/ commands.py", line 37, in deploy checkup() File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/ commands.py", line 140, in checkup if not 'Ubuntu' in run('cat /etc/issue'): File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__ File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__ paramiko.SSHException: Invalid key Alexs-MacBook:fabric alex$ I can't connect to the server via ssh. What can be my problem? Regards, Arshavski Alexander.

    Read the article

  • How do I compile a static library (fat) for armv6, armv7 and i386

    - by unforgiven
    I know this question has been posed several times, but my goal is slightly different from what I have found searching the web. Specifically, I am already able to build a static library for iPhone, but the final fat file I am able to build only contains the arm and i386 architectures (and I am not sure what arm refers to: is it v6 or v7?). I am not able to compile specifically for armv6 and armv7 and then merge both architectures using lipo. The lipo tool complains that the same architecture (arm, not armv6 or armv7) is present in both the armv6 and armv7 libraries. Can someone explain exactly how to build for armv6 and armv7, and then merge these libraries into a fat file using lipo?
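
    A sketch of the command-line route, assuming a static library target named MyLib (the target name, configuration and output paths are placeholders): build each architecture as a separate thin library by overriding ARCHS per invocation, then stitch them together with lipo.

      # Build each slice separately (placeholder target/paths).
      xcodebuild -target MyLib -configuration Release -sdk iphoneos ARCHS="armv6" build
      cp build/Release-iphoneos/libMyLib.a libMyLib-armv6.a
      xcodebuild -target MyLib -configuration Release -sdk iphoneos ARCHS="armv7" build
      cp build/Release-iphoneos/libMyLib.a libMyLib-armv7.a
      xcodebuild -target MyLib -configuration Release -sdk iphonesimulator ARCHS="i386" build
      cp build/Release-iphonesimulator/libMyLib.a libMyLib-i386.a

      # Merge the thin libraries into one fat file and verify the slices.
      lipo -create libMyLib-armv6.a libMyLib-armv7.a libMyLib-i386.a -output libMyLib.a
      lipo -info libMyLib.a

    If lipo -info on the intermediate files reports the generic architecture "arm" rather than armv6/armv7, the per-invocation ARCHS override did not take effect and both builds contain the same slice, which is exactly the duplicate-architecture complaint lipo raises.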

    Read the article

  • Building iPhone static library for armv6 and armv7 that includes another static library

    - by Martijn Thé
    Hi, I have an Xcode project that has a "master" static library target, that includes/links to a bunch of other static libraries from other Xcode projects. When building the master library target for "Optimized (armv6 armv7)", an error occurs in the last phase, during the CreateUniversalBinary step. For each .o file of the libraries that is included by the master library, the following error is reported (for example, the FBConnectGlobal.o file): warning for architecture: armv6 same member name (FBConnectGlobal.o) in output file used for input files: /Developer_Beta/Builds/MTToolbox/MTToolbox.build/Debug-iphoneos/MTToolbox.build/Objects-normal/armv6/libMTToolbox.a(FBConnectGlobal.o) and: /Developer_Beta/Builds/MTToolbox/MTToolbox.build/Debug-iphoneos/MTToolbox.build/Objects-normal/armv7/libMTToolbox.a(FBConnectGlobal.o) due to use of basename, truncation and blank padding In the end, Xcode tells that the build has succeeded. However, when using the final static library in an application project, it won't build because it finds duplicate symbols in one part of build (armv6) and misses symbols in the other part of the build (armv7). Any ideas how to fix this? M

    Read the article

  • What is the performance impact of CSS's universal selector?

    - by Bungle
    I'm trying to find some simple client-side performance tweaks in a page that receives millions of monthly pageviews. One concern that I have is the use of the CSS universal selector (*). As an example, consider a very simple HTML document like the following: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Example</title> <style type="text/css"> * { margin: 0; padding: 0; } </head> <body> <h1>This is a heading</h1> <p>This is a paragraph of text.</p> </body> </html> The universal selector will apply the above declaration to the body, h1 and p elements, since those are the only ones in the document. In general, would I see better performance from a rule such as: body, h1, p { margin: 0; padding: 0; } Or would this have exactly the same net effect? Essentially, what I'm asking is if these rules are effectively equivalent in this case, or if the universal selector has to perform more unnecessary work that I may not be aware of. I realize that the performance impact in this example may be very small, but I'm hoping to learn something that may lead to more significant performance improvements in real-world situations. Thanks for any help!

    Read the article

  • Image loader can't load my live image URL

    - by Bindhu
    In my application i need to load the images in list view, when using locale(ip ported url) then no problem all images are loading properly, But when using live url then the images are not loading, My image loader class: public class ImageLoader { MemoryCache memoryCache = new MemoryCache(); FileCache fileCache; private Map<ImageView, String> imageViews = Collections .synchronizedMap(new WeakHashMap<ImageView, String>()); ExecutorService executorService; public ImageLoader(Context context) { fileCache = new FileCache(context); executorService = Executors.newFixedThreadPool(5); } final int stub_id = R.drawable.appointeesample; public void DisplayImage(String url, ImageView imageView) { imageViews.put(imageView, url); Bitmap bitmap = memoryCache.get(url); if (bitmap != null) imageView.setImageBitmap(bitmap); else { Log.d("stub", "stub" + stub_id); queuePhoto(url, imageView); imageView.setImageResource(stub_id); } } private void queuePhoto(String url, ImageView imageView) { PhotoToLoad p = new PhotoToLoad(url, imageView); executorService.submit(new PhotosLoader(p)); } private Bitmap getBitmap(String url) { File f = fileCache.getFile(url); // from SD cache Bitmap b = decodeFile(f); if (b != null) return b; // from web try { Bitmap bitmap = null; URL imageUrl = new URL(url); HttpURLConnection conn = (HttpURLConnection) imageUrl .openConnection(); conn.setConnectTimeout(30000); conn.setReadTimeout(30000); conn.setInstanceFollowRedirects(true); InputStream is = conn.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is, 81960); BitmapFactory.Options opts = new BitmapFactory.Options(); opts.inJustDecodeBounds = true; OutputStream os = new FileOutputStream(f); Utils.CopyStream(bis, os); os.close(); bitmap = decodeFile(f); Log.d("bitmap", "Bit map" + bitmap); return bitmap; } catch (Exception ex) { ex.printStackTrace(); return null; } } // decodes image and scales it to reduce memory consumption private Bitmap decodeFile(File f) { try { try { BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeStream(new FileInputStream(f), null, o); final int REQUIRED_SIZE = 200; int scale = 1; while (o.outWidth / scale / 2 >= REQUIRED_SIZE && o.outHeight / scale / 2 >= REQUIRED_SIZE) scale *= 2; BitmapFactory.Options o2 = new BitmapFactory.Options(); o2.inSampleSize = scale; return BitmapFactory.decodeStream(new FileInputStream(f), null, o2); } catch (FileNotFoundException e) { } finally { System.gc(); } return null; } catch (Exception e) { } return null; } // Task for the queue private class PhotoToLoad { public String url; public ImageView imageView; public PhotoToLoad(String u, ImageView i) { url = u; imageView = i; } } class PhotosLoader implements Runnable { PhotoToLoad photoToLoad; PhotosLoader(PhotoToLoad photoToLoad) { this.photoToLoad = photoToLoad; } @Override public void run() { if (imageViewReused(photoToLoad)) return; Bitmap bmp = getBitmap(photoToLoad.url); memoryCache.put(photoToLoad.url, bmp); if (imageViewReused(photoToLoad)) return; BitmapDisplayer bd = new BitmapDisplayer(bmp, photoToLoad); Activity a = (Activity) photoToLoad.imageView.getContext(); a.runOnUiThread(bd); } } boolean imageViewReused(PhotoToLoad photoToLoad) { String tag = imageViews.get(photoToLoad.imageView); if (tag == null || !tag.equals(photoToLoad.url)) return true; return false; } // Used to display bitmap in the UI thread class BitmapDisplayer implements Runnable { Bitmap bitmap; PhotoToLoad photoToLoad; public BitmapDisplayer(Bitmap b, PhotoToLoad p) { 
bitmap = b; photoToLoad = p; } public void run() { if (imageViewReused(photoToLoad)) return; if (bitmap != null) photoToLoad.imageView.setImageBitmap(bitmap); else photoToLoad.imageView.setImageResource(stub_id); } } public void clearCache() { memoryCache.clear(); fileCache.clear(); } My Live Image url for Example: https://goappointed.com/images_upload/3330Torana_Logo.JPG I have referred google but no solution is working, Thanks a lot in advance.

    Read the article

  • Transparent TextBox on GotFocus in Windows Phone 8.1?

    - by user3701923
    I need a transparent TextBox in my Windows Phone 8.1 Runtime application. I set Background="Transparent" on the textbox, so it is transparent when it is loaded, but on focus the background color changes to white. I wrote the following code to make it transparent, but it doesn't work: <TextBox Background="Transparent" GotFocus="titleBox_GotFocus" /> C# private void titleBox_GotFocus(object sender, RoutedEventArgs e) { titleBox.Background = new SolidColorBrush(Colors.Transparent); }

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy setting are shown as following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so then we can see what will happen when those channel records with an “invalid” parent are loading into dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension value when user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from table SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into error table when loading the cube. 2. Deploy objects and mappings We now can deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • What is the Sarbanes-Oxley (SOX) Act?

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, the act outlines which pieces of data are to be stored as well as the storage duration. The SOX act targets the financial side of companies, but its impact can be seen within the technology arena as well, because IT is responsible for storing all of a company’s electronic records regardless of file type. The act specifies that all records and electronic messages must be saved for no less than five years, according to SearchCIO. In addition, the consequences for non-compliance are fines, imprisonment, or both. Sarbanes-Oxley Act rules that affect the management of electronic records, according to SearchCIO: allowed practices regarding destruction, alteration, or falsification of records; the retention period for records storage. Best practices indicate that corporations securely store all business records using the same guidelines set for public accountants. Types of business records that need to be stored: business records and business communications, including electronic communications. References: SOXLaw: The Sarbanes-Oxley Act 2002. Retrieved May 2011 from http://www.soxlaw.com/ SearchCIO: What is the Sarbanes-Oxley Act (SOX)? Retrieved May 2011 from http://searchcio.techtarget.com/definition/Sarbanes-Oxley-Act

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy looking at the first few rows, but occasionally you need to see more. SQL Developer doesn’t show you all records all at once. Instead, it brings the records down in ‘chunks,’ or as needed. How It Works There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch,’ of records. The default is 50… So if I run a query that returns MORE than 50 rows: there are more than 50 records in this result set, but we have 50 in the grid to start with. We don’t actually know how many records are in this result set. To show the record count here, we would have to go and physically query the database with a row-count type query. All we know is that the query has finished executing and that there are rows available to fetch; it tells us when it’s done. As you scroll through the grid, if you get to record 50 and scroll more, we’ll get 50 more records. Or, you can cheat to get to the ‘bottom’ of the result set: you can ask SQL Developer to just get all the records at once… Once all the records have been fetched, you’ll see this: All rows fetched! A word of caution: there’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance to be in that 50 to 200 record range.

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid so that the query becomes LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid and run that we get a different result. A pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) is has added two columns to the recordset - filename and rownumber. This behaviour can be very useful as we will see in future parts of this series. You can click Next 10 rows or All rows or close the datagrid once you are finished reviewing the data. You may have noticed that the files that I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to another then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv': logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv Those familiar with SQL will not have to make a very big leap of faith to making adjustments to the above query to filter in/out records from the source file. Lets get all the records from the same file where the Order Quantity (OrderQty) is more than 25: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv Or we could find all those records where the Order Quantity is equal to 25 and output it to an xml file: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml All the standard comparison operators are to be found in LogParser; >, <, =, LIKE, BETWEEN, OR, NOT, AND. Input and Output file formats. LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is useable for whatever process or application you may be using. From any of these To any of these IISW3C: parses IIS log files in the W3C Extended Log File Format.   NAT: formats output records as readable tabulated columns. IIS: parses IIS log files in the Microsoft IIS Log File Format. CSV: formats output records as comma-separated values text. BIN: parses IIS log files in the Centralized Binary Log File Format. TSV: formats output records as tab-separated or space-separated values text. IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format. XML: formats output records as XML documents. 
HTTPERR: parses HTTP error log files generated by Http.sys. W3C: formats output records in the W3C Extended Log File Format. URLSCAN: parses log files generated by the URLScan IIS filter. TPL: formats output records following user-defined templates. CSV: parses comma-separated values text files. IIS: formats output records in the Microsoft IIS Log File Format. TSV: parses tab-separated and space-separated values text files. SQL: uploads output records to a table in a SQL database. XML: parses XML text files. SYSLOG: sends output records to a Syslog server. W3C: parses text files in the W3C Extended Log File Format. DATAGRID: displays output records in a graphical user interface. NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats. CHART: creates image files containing charts. TEXTLINE: returns lines from generic text files. TEXTWORD: returns words from generic text files. EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files). FS: returns information on files and directories. REG: returns information on registry values. ADS: returns information on Active Directory objects. NETMON: parses network capture files created by NetMon. ETW: parses Enterprise Tracing for Windows trace log files and live sessions. COM: provides an interface to Custom Input Format COM Plugins. So, you can query data from any of the types on the left and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network Administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!

    Read the article
