Our new Server

Introduction


In the past, I wrote about TransIP and how good they are. That was a big mistake. In 2019, their support really messed up by giving me the wrong advice, which led to a serious disaster: losing my domain name and website!

I started again from scratch and we're back! Not with TransIP, but with a far bigger, faster, and better cloud provider: upCloud!

For half the price, I have triple the performance, RAM, and SSD storage, plus excellent 24/7 support!

 

How to highlight terms in Lucene

When you search in a major search engine like Google or Yandex, you type your search terms and press Enter or click the search button, and the results are displayed:

(Screenshot: Yandex search results with the search terms highlighted)

You can see the title, the URL, and a text fragment from the target document, with the search terms highlighted in bold where they appear. Highlighting is a genuinely useful feature: a glimpse of the context around the submitted keywords helps users decide whether to take a deeper look at the matching document.

What do we want to do?

This tutorial will show you how to do the same thing as the major search engines, using the Lucene highlight package.

The documents I want to highlight are blog posts. Most blogging software, like WordPress, simply presents a title and a short excerpt of the content on the blog entry list page. I think that is not a very effective way to show a list of your blog posts. Why can't this list look just like the SERPs we flip through every day in Google? It would carry all the information the old style has, plus valuable context that a simple list can never provide.

This post solves that problem by highlighting each post based on its title and content.

The idea is simple. We need two fields for each document, the title and the content; the text will be indexed with term vectors enabled. Then we extract or manually specify the main keywords of the blog post and construct a query from those keywords. To learn more about term vectors, see What is term vector.

Usually, the title itself can serve as the query for a blog post; a good title already contains the main keywords of the content. For example, in the title How to read UTF8 text file into String in Java, the keywords include read, UTF8, Java, and text file. If you parse this title with QueryParser, most of them will be identified, and stop words like to and in will be removed if you are using a standard analyzer. For simplicity, this tutorial uses this method.
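As a rough illustration of what analysis does to such a title, here is a plain-Java sketch (this is not Lucene's actual StandardAnalyzer; the stop-word set below is a small hypothetical subset used only for the demonstration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TitleTokens {
    // Hypothetical subset of a standard English stop-word list
    static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("to", "in", "into", "a", "the"));

    // Lowercase, split on non-alphanumeric characters, drop stop words --
    // a crude stand-in for what an analyzer chain does
    public static List<String> tokens(String title) {
        List<String> result = new ArrayList<>();
        for (String t : title.toLowerCase().split("[^a-z0-9]+")) {
            if (!t.isEmpty() && !STOP_WORDS.contains(t)) {
                result.add(t);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Keywords like "read", "utf8", "file", "string", "java" survive;
        // "to", "into", and "in" are dropped
        System.out.println(tokens("How to read UTF8 text file into String in Java"));
    }
}
```

The surviving tokens are essentially the keywords that QueryParser would turn into query terms.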

The second method is to add a new field to the document containing keywords specified by the author of the content. These are typically the same keywords used in the keywords meta tag for SEO.

The third is to extract keywords by analyzing the title and content with some kind of SEO software or a WordPress SEO plugin.

With a proper query, the Lucene search highlighter will find the best text fragments containing those keywords and highlight them by rendering them in bold.

In essence, this is not a very hard problem. Given the tokenized stream with positional information, the query keywords, and the original text, it is easy to come up with an algorithm that locates the precise positions in the original text and extracts fragments by selecting the text around those positions.
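To make that concrete, here is a deliberately naive plain-Java sketch of the idea (no Lucene, no analysis or stemming, just case-insensitive literal matching): find the keywords in the original text, cut a fragment of surrounding context, and wrap each hit in <B> tags. Lucene's highlighter does essentially this, but driven by the analyzer's token stream and offsets.

```java
import java.util.regex.Pattern;

public class NaiveHighlighter {
    // Return a fragment of up to `radius` characters around the first keyword
    // hit, with every case-insensitive occurrence of each keyword wrapped in
    // <B>..</B>. Returns "" when no keyword occurs in the text.
    public static String fragment(String text, String[] keywords, int radius) {
        String lower = text.toLowerCase();
        int first = -1;
        for (String kw : keywords) {
            int pos = lower.indexOf(kw.toLowerCase());
            if (pos >= 0 && (first < 0 || pos < first)) {
                first = pos; // earliest keyword position
            }
        }
        if (first < 0) {
            return ""; // no keyword found
        }
        int start = Math.max(0, first - radius);
        int end = Math.min(text.length(), first + radius);
        String frag = text.substring(start, end);
        for (String kw : keywords) {
            // (?i) = case-insensitive; Pattern.quote treats the keyword literally
            frag = frag.replaceAll("(?i)" + Pattern.quote(kw), "<B>$0</B>");
        }
        return frag;
    }

    public static void main(String[] args) {
        String text = "A file, in its nature, is a byte array, even if it is a text file.";
        System.out.println(fragment(text, new String[] {"file", "byte"}, 40));
    }
}
```

This ignores everything the real highlighter handles for you: tokenization, stemming, scoring of competing fragments, and fragment boundaries that respect word breaks.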

Thankfully, the Lucene search highlight package already provides optimized algorithms and solutions for us, and they are easy to use.

What do we need to get highlighted text fragments?

The only hard requirement is that the text of the field is stored; everything else is optional: term vectors, tokenization, indexing, offsets.

If you don’t store the text, make sure you can retrieve it from the data source: the token stream will be retrieved from the index, while the text comes from the data source, and that text must be identical to the text that was indexed.

Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(queryToSearch));
TokenStream tokenStream = TokenSources.getTokenStream(field, text, analyzer);
TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 4);

If you stored the text but did not index it, the token stream will be computed on the fly and the text can be retrieved from the index. Just call the overloaded getAnyTokenStream.

Lucene already provides very convenient classes and methods to generate highlighted documents; the following snippet shows, in brief, all the classes and methods we need:

Highlighter highlighter = new Highlighter(htmlFormatter,
        new QueryScorer(queryToSearch));
TokenStream tokenStream = TokenSources.getAnyTokenStream(idxReader, id, "content", analyzer);
TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 4);

All we need is a query, the token stream (retrieved by document id), and the text content of the field (also retrieved by document id). Calling getBestTextFragments then gives us an array of text fragments, ready to display as HTML.

Just make sure the text is stored; Lucene handles everything else. If you didn’t analyze the field at index time, Lucene will do it for you at query time.

Step 1 Create a Gradle project with Lucene dependencies

Create a Java Quickstart Gradle project:

Gradle build file

 
apply plugin: 'java'
apply plugin: 'eclipse'
 
ext.luceneVersion= "6.0.0"
 
sourceCompatibility = 1.8 // Lucene 6 requires Java 8
version = '1.0'
jar {
    manifest {
        attributes 'Implementation-Title': 'Gradle Quickstart', 'Implementation-Version': version
    }
}
 
repositories {
    mavenCentral()
}
 
dependencies {
    compile group: 'commons-collections', name: 'commons-collections', version: '3.2'
    testCompile group: 'junit', name: 'junit', version: '4.+'
    compile "org.apache.lucene:lucene-core:${luceneVersion}"
    compile "org.apache.lucene:lucene-analyzers-common:${luceneVersion}"
    compile "org.apache.lucene:lucene-queryparser:${luceneVersion}"
    compile "org.apache.lucene:lucene-highlighter:${luceneVersion}"
}
 
test {
    systemProperties 'property': 'value'
}
 
uploadArchives {
    repositories {
       flatDir {
           dirs 'repos'
       }
    }
}
 

This example uses Lucene 6.0.0.

Step 2 Index and search it with the highlight

Create a new package com.makble.lucenesearchhighlight and add a new class:

 
package com.makble.lucenesearchhighlight;
 
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
 
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.search.highlight.TextFragment;
import org.apache.lucene.search.highlight.TokenSources;
import org.apache.lucene.store.RAMDirectory;
 
public class Test {
 
    public static Analyzer analyzer = new StandardAnalyzer();
    public static IndexWriterConfig config = new IndexWriterConfig(
            analyzer);
    public static RAMDirectory ramDirectory = new RAMDirectory();
    public static IndexWriter indexWriter;
 
    public static String readFileString(String file) {
        StringBuffer text = new StringBuffer();
        try {
 
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream(new File(file)), "UTF8"));
            String line;
            while ((line = in.readLine()) != null) {
                text.append(line + "\r\n");
            }
 
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
 
        return text.toString();
    }
 
    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        Document doc = new Document(); // create a new document
 
        /**
         * Create a field with term vector enabled
         */
        FieldType type = new FieldType();
        type.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
        type.setStored(true);
        type.setStoreTermVectors(true);
        type.setTokenized(true);
        type.setStoreTermVectorOffsets(true);
 
        Field field = new Field("title",
                "How to read UTF8 text file into String in Java", type); //term vector enabled
        Field f = new TextField("content", readFileString("c:\\tmp\\content.txt"),
                Field.Store.YES); 
        doc.add(field);
        doc.add(f);
 
        try {
            indexWriter = new IndexWriter(ramDirectory, config);
            indexWriter.addDocument(doc);
            indexWriter.close();
 
            IndexReader idxReader = DirectoryReader.open(ramDirectory);
            IndexSearcher idxSearcher = new IndexSearcher(idxReader);
            Query queryToSearch = new QueryParser("title", analyzer).parse("read file string utf8");
            TopDocs hits = idxSearcher
                    .search(queryToSearch, idxReader.maxDoc());
            SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
            Highlighter highlighter = new Highlighter(htmlFormatter,
                    new QueryScorer(queryToSearch));
 
            System.out.println("reader maxDoc is " + idxReader.maxDoc());
            System.out.println("scoreDoc size: " + hits.scoreDocs.length);
            for (int i = 0; i < hits.totalHits; i++) {
                int id = hits.scoreDocs[i].doc;
                Document docHit = idxSearcher.doc(id);
                String text = docHit.get("content");
                TokenStream tokenStream = TokenSources.getAnyTokenStream(idxReader, id, "content", analyzer);
                TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 4);
                for (int j = 0; j < frag.length; j++) {
                    if ((frag[j] != null) && (frag[j].getScore() > 0)) {
                        System.out.println((frag[j].toString()));
                    }
                }
 
                System.out.println("start highlight the title");
                // Term vector
                text = docHit.get("title");
                tokenStream = TokenSources.getAnyTokenStream(
                        idxSearcher.getIndexReader(), hits.scoreDocs[i].doc,
                        "title", analyzer);
                frag = highlighter.getBestTextFragments(tokenStream, text,
                        false, 4);
                for (int j = 0; j < frag.length; j++) {
                    if ((frag[j] != null) && (frag[j].getScore() > 0)) {
                        System.out.println((frag[j].toString()));
                    }
                }
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InvalidTokenOffsetsException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
 

In this example, I created a document with two fields, one for the title and another for the content. Notice that we enabled the term vector.

This is the output we get:

 
I was trying to <B>read</B> <B>utf8</B> text from a text <B>file</B>
, <B>string</B> is just a byte array. The function can just <B>read</B> the <B>file</B> raw data to memory and reference
( new FileInputStream(new <B>File</B>(<B>file</B>)), "<B>UTF8</B>") );
            <B>String</B> line;
            while ( (line = in.readLine
. 
 
A <B>file</B>, in its nature its a byte array, even it is a text <B>file</B>. To get a <B>String</B> from the <B>file</B>
start highlight the title
How to <B>read</B> <B>UTF8</B> text <B>file</B> into <B>String</B> in Java

This is what it looks like in the browser.

Improve the code

To support search highlighting, we don’t need to enable term vectors or other index options manually; the only requirement is that the text of the field is stored.

This is the minimum we need to prepare. Highlighting needs two inputs: the text and the token stream derived from the text; the latter can be computed dynamically from the text.

Lucene will do the necessary computation to obtain all the information it needs, like term vectors, positions, and offsets, to produce the highlighted text fragments.

Another thing I noticed is that it doesn’t seem to matter what you pass as the first parameter of QueryParser; the only thing that matters here is the query string you pass to the parse method.

The following all generate the same results:

 
Query queryToSearch = new QueryParser("asddf", analyzer).parse("read text file string utf8");
 
Query queryToSearch = new QueryParser("", analyzer).parse("read text file string utf8");
 
Query queryToSearch = new QueryParser(null, analyzer).parse("read text file string utf8");
 
Query queryToSearch = new QueryParser("title", analyzer).parse("read text file string utf8"); 

It looks like the highlighter ignores this parameter. That makes sense: the first argument is only the default field name that parsed terms are attributed to, and a QueryScorer built from just the query does not restrict matches to a particular field, so the highlighted output is the same.

To make the code more concise, I refactored it.

 
    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        buildIndex();
        DoQuery2();
    }
    public static void DoQuery2(){
        try {
            IndexReader idxReader = DirectoryReader.open(ramDirectory);
            IndexSearcher idxSearcher = new IndexSearcher(idxReader);
            Query queryToSearch = new QueryParser("asddf", analyzer).parse("read text file string utf8"); 
            SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
            Highlighter highlighter = new Highlighter(htmlFormatter,
                    new QueryScorer(queryToSearch));
 
            highLight(0, idxSearcher, idxReader, "content", highlighter);
            highLight(0, idxSearcher, idxReader, "title", highlighter);    
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    public static void DoQuery(){
        try {
            IndexReader idxReader = DirectoryReader.open(ramDirectory);
            IndexSearcher idxSearcher = new IndexSearcher(idxReader);
            Query queryToSearch = new QueryParser("title", analyzer).parse("read file string utf8");
            TopDocs hits = idxSearcher.search(queryToSearch, idxReader.maxDoc());
            SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
            Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(queryToSearch));
 
            System.out.println("reader maxDoc is " + idxReader.maxDoc());
            System.out.println("scoreDoc size: " + hits.scoreDocs.length);
            for (int i = 0; i < hits.totalHits; i++) {
                int id = hits.scoreDocs[i].doc;
                System.out.println("doc id : " + i);
                highLight(id, idxSearcher, idxReader, "content", highlighter);    
                System.out.println("start highlight the title");
                highLight(id, idxSearcher, idxReader, "title", highlighter);    
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
 
    public static void buildIndex () {
        Document doc = new Document(); 
 
        FieldType type = new FieldType();
        type.setStored(true); // stored is all you need for highlighting
 
        Field field = new Field("title",
                "How to read UTF8 text file into String in Java", type); 
        Field f = new TextField("content", readFileString("c:\\tmp\\content.txt"),
                Field.Store.YES); 
        doc.add(field);
        doc.add(f);
 
        try {
            indexWriter = new IndexWriter(ramDirectory, config);
            indexWriter.addDocument(doc);
            indexWriter.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
 
    }
 
    public static void highLight(int id, IndexSearcher idxSearcher, IndexReader idxReader, String field, Highlighter highlighter) {
        try {
            Document doc = idxSearcher.doc(id);
            String text = doc.get(field);
            TokenStream tokenStream = TokenSources.getAnyTokenStream(idxReader, id, field, analyzer);
            TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 4);
            for (int j = 0; j < frag.length; j++) {
                if ((frag[j] != null)) {
                    System.out.println("score: " + frag[j].getScore() + ", frag: " + (frag[j].toString()));
                }
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InvalidTokenOffsetsException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

We don’t even need to perform an actual search to get highlighted text fragments: if you know the document id and the text is stored, you are ready to go. If all you indexed is a list of blog posts, you can simply loop over the documents and highlight each one.

The simplest highlighter can be just a few lines of code:

 
    public static void highlight(String text, String query) {
        try {
            Query queryToSearch;
            queryToSearch = new QueryParser("", analyzer).parse(query);
            TokenStream tokenStream = TokenSources.getTokenStream("default", text, analyzer);
            Highlighter highlighter = new Highlighter(new SimpleHTMLFormatter(),new QueryScorer(queryToSearch));
 
            TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 4);
            for (int j = 0; j < frag.length; j++) {
                if ((frag[j] != null)) {
                    System.out.println("score: " + frag[j].getScore() + ", frag: " + (frag[j].toString()));
                }
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InvalidTokenOffsetsException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
 

No documents, fields, or indexing are involved; it just highlights a piece of text based on a given query.

How I earn 500 USD a day while sleeping

Introduction

About 15 years ago, I started thinking about how to make money online using my Java programming skills. Twelve years later, I live with my beautiful young wife in the Philippines and the cash keeps coming in without effort.

How?

On the Internet, the only things that have value are websites with many users, so you (or Google) can place ads. You can choose how to get paid; per click is the most popular!

What do you need?

  1. A Google account.
  2. A website and/or YouTube channel (I prefer both).
  3. A computer with Java 8 or higher. Preferably with a Linux or Unix OS so we can use cron to schedule tasks.
  4. My CashGenerator.jar Java Program

Requirements 1, 2, and 3 are easy to meet. Number 4 (my CashGenerator.jar Java program) took me years of research and development to realize, test, and make work for me. I don’t want to sell it cheap or give it away for free. Please mail or contact me for an offer.

Sounds good! What does it do?

To make your site rank higher in search engines you need backlinks, as I described in this article. My software scans the Internet for (discussion) forums and places your link(s) there automatically while you are sleeping. The task scheduler runs the program on the dates and times you choose!

Windows 7 is dead! 10 will follow soon!

Introduction

I have no idea what Micro$oft is doing! They are digging their own grave by ending support for Windows 7!

Who is waiting for the stupid tiles in Windows 10? And for the unexpected updates that force you to quit what you had been working on for several hours?

Luckily there are alternatives!

Personally, I switched to MX Linux because it’s much better than Windows or Mac OS in terms of speed, usability, and pricing. As you probably know, Linux is free and Open Source, and so is the software! So what’s holding you back?


I admit there’s a (short) learning curve when switching from M$ Office to LibreOffice, but once you are used to it you’ll find that LibreOffice has everything M$ Office has, and even more!

Photoshop users will also be glad about the free and Open Source GIMP, since it simply offers more and better functionality than its Adobe competitor! Here too there is a short learning curve: Photoshop CS6 users reported that it took them a day to get up and running with GIMP!

What is the future of Microsoft?

It may take some time before the industry understands. I guess 4 to 5 years before M$ is bankrupt!

Job Opening


Currently we are looking for a Front-end Developer for Philips. It is a 3-month project, 40 hours p/w, located in Eindhoven (Noord-Brabant), the Netherlands.

Frontend Developer

General Description:

Philips Digital Cognitive Diagnostics is a new business within the Philips HealthWorks venture organization. The venture is tasked with developing a new “software only” product called IntelliSpace Cognition. This new product, which is a class II medical device, will help neurologists assess the cognitive performance of people with a neuro(degenerative) disease. The initial market for IntelliSpace Cognition is the US.

The venture is currently seeking a Senior Frontend Developer. The position is based in Eindhoven, the Netherlands.

Requirements:

  • Design, test, develop, deploy, maintain, and improve software assets
  • Deliver high-quality code through hands-on development with attention to detail
  • Analyze and research issues and provide solutions to resolve them quickly
  • Open to pair programming with team members
  • Actively participate in solution design with system engineers and architects

Technical:

  • At least 5 years of relevant experience in developing frontend applications
  • Experience with Agile development methodologies
  • Front-end experience: HTML5, Angular 5/6, JavaScript/TypeScript, CSS3
  • Relevant experience in Cordova (iOS) hybrid application development
  • Moderately experienced with cloud development (PaaS, IaaS, SaaS)
  • Moderately experienced with continuous integration scripts
  • Highly experienced in test-driven development

Nice to haves:

  • Relevant experience with TFS
  • Relevant experience with Medical Device development

Other:

  • An interest in and preferably working experience with Agile development methodologies
  • Excellent communication skills (fluent in English)
  • Interest in joining a multidisciplinary, multi-cultural, multi-site team
  • Problem-solving mindset; strong analytical, conceptual, and creative thinking skills
  • Full-time available on-site (HTC)

Conditions:

The position is initially for a period of 3 months, though it may be extended through the course of 2020.

If you are interested, please send me your most recent CV in Word format with a short motivation on why you would fit this job. Also, can you give me an indication of your hourly rate / current salary and your current availability?

Kind regards,

 

 

 

Evert Wagenaar
  MSP Resource Strategist
  MSPIT2@sire-search.com
+31 (0) 10-3161066
 


 

 

Dear Melissa,

Thanks for your request. I will start working on it and let you know when a suitable candidate has been found.

With kind regards,

Evert-Jan Wagenaar.


Here’s how to deal with Nigerian scams

Introduction

I received this today and decided to answer in a proper way:🤣

Dear Mr. Ziegner,

I’m heartbroken to hear the news about my uncle Ron. He was a nice man, but unlike what you tell me, he promised me his whole capital back in 2015.
I have this in writing and therefore I trust you to transfer the FULL AMOUNT to my account.
My bank details are as follows:
IBAN: NL67ABNA06030032490
I trust it will be in my account next week.
If your payment is not there in time, or is incomplete, I will ask my collection agency (Moscowitch Lawyer Office, Amsterdam, the Netherlands) to follow up with you. Such an action will generally result in a doubling of the amount, plus registration of your company’s name on our blacklist.
I trust you will take appropriate action.
With kind regards,
Dr. Evert Wagenaar.
On Tue, Jul 9, 2019, 7:16 PM Joerg Ziegner, <joergziegner.tg@gmail.com> wrote:

Hello, Evert Wagenaar

How are you doing today? Firstly, I must solicit your confidence in this transaction,this is by virtue of its nature as being utterly confidential and top secret.I write you before and i am writing on full details. Though I know that a transaction of this magnitude will make any one apprehensive and worried,but I am assuring you that all will be well at the end of the day.

Let me start by introducing myself properly to you.It may surprise you receiving this mail from me since there was no previous correspondence between us. My name is Barr. Joerg Ziegner, Esq. a personal Attorney to late (Dr. Ron Wagenaar)   He died in a car accident which occurred on the 1st of June 2017 , leaving no record of any family or relation, since then all my several inquiries here to locate any of my clients extended family relations has proved unsuccessful.

Thus I decided to search with his name through the public records to locate any member of his extended family hence I saw your name and decided to contact you after presenting your name to the Bank, i know you may not be biologically related to him but you have the same name it will be an easy transfer since the Bank has ask me to contact you as next of kin to inherit his estate.I decided to contact you to enable us retrieve this deposit from the Bank where it is deposited.

Before the car accident that claimed his life, he deposited a total amount $5,500.000.00 (Five Million Five Hundred Thousand United State Dollars Only) In a Bank here.

The same Bank has mandated me to present a member of his family (heir/inheritor) to make claim or the deposit will be confiscated and taken to the bureau of government as unclaimed fund. With regards to this,

I seek your consent to present you as the next of kin to the deceased so that the estate fund would be released to you and the content disbursed between us on the ratio of 45% of the total sum as gratification  for helping me champion this deal and 45% will be for me and my family while 10% I suggest will be donated to the charity organizations in any Country of your choice as these funds does not originally belong to either of us but if you feel otherwise do not hesitate to notify me.

All I require of you is your honest co-operation to enable us see this transaction through.

I guarantee that this transaction would be executed under legitimate arrangement that will protect you from any breach of the law. You can get across to me via Email for further clarification. Don’t get back to me as well even when you are not willing to collaborate with me so as to further my search for another partner.

I will be waiting for your kind response.

Kindest Regards.

Yours in legal matters,

From Hon: Joerg Ziegner,Esq.
Tel. +228-79668649.
ZIEGNER SOLICITOR & ADVOCATES.

Akossombo, Boulevard Du 30 Aout, Lomé-Togo.

Update

This scammer has been reported to Interpol as of today, 10-07-2019.

The real corrupt governments: the Netherlands or the Philippines?

Introduction

I have lived here with my fiancée for almost a year now. I’m trying to make an honest living by helping EU businesses do business in Asia. At first I had my doubts about moving to a corrupt part of the world. After my first year, I have adjusted my opinions.

  1. The Philippines is not innocent. Everything is for sale if you have connections and money. Sure, Mrs. Marcos owned 200 pairs of shoes; a shame! But compare that with the wardrobe of Queen Maxima! If we could levy wealth tax on that, all the financial problems of NL would be gone forever, including the budget deficits! And wealth tax was the only tax the royal family had to pay. Yes, had! Since 1973 or so, it was decided that they could get it back!
  2. The pleasure boat of grandma Beatrix (the “Green Dragon” or so) has annual service costs large enough to fill the entire budget of the complete Ministry of Defence!
  3. And then we’re not even talking about their properties and the ‘allowance’ of Princess Amalia, a little girl. She receives 15 M € a year for nothing!

Who are paying?

The hard-working taxpayers of the Netherlands. It’s theft, in my opinion!

Conclusion

The Dutch government loves to point its finger at the Asians when it comes to corruption and nepotism, but in fact they’re much worse!